Test Report: Docker_Linux_crio_arm64 21918

08454a179ffa60c8ae500105aac58654b5cdef58:2025-11-19:42399

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.39
35 TestAddons/parallel/Registry 16.31
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 144.33
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 6.36
41 TestAddons/parallel/CSI 48.1
42 TestAddons/parallel/Headlamp 3.57
43 TestAddons/parallel/CloudSpanner 5.46
44 TestAddons/parallel/LocalPath 8.41
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.29
97 TestFunctional/parallel/ServiceCmdConnect 604.3
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.86
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
135 TestFunctional/parallel/ServiceCmd/Format 0.51
136 TestFunctional/parallel/ServiceCmd/URL 0.51
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.27
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
191 TestJSONOutput/pause/Command 1.78
197 TestJSONOutput/unpause/Command 1.43
282 TestPause/serial/Pause 6.6
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.66
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.76
311 TestStartStop/group/old-k8s-version/serial/Pause 6.51
317 TestStartStop/group/no-preload/serial/Pause 7.91
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.8
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.48
333 TestStartStop/group/embed-certs/serial/Pause 7.08
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.04
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.46
349 TestStartStop/group/newest-cni/serial/Pause 6.11
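
Several of the TestAddons failures detailed below (serial/Volcano, parallel/Registry, parallel/RegistryCreds) share a single signature: "minikube addons disable <addon>" exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the "is the cluster paused?" check shells into the node and runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this crio run. The check can be replayed by hand against the profile from this run; a minimal shell sketch, assuming the addons-441523 node is still up (both commands are lifted from the failure logs below):

    # List kube-system containers via crictl, as the paused-state check does first
    out/minikube-linux-arm64 -p addons-441523 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # This is the step that fails in the logs: runc cannot open its state directory /run/runc
    out/minikube-linux-arm64 -p addons-441523 ssh "sudo runc list -f json"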
TestAddons/serial/Volcano (0.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable volcano --alsologtostderr -v=1: exit status 11 (389.858724ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:51:54.140135  869045 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:51:54.141412  869045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:51:54.141435  869045 out.go:374] Setting ErrFile to fd 2...
	I1119 21:51:54.141447  869045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:51:54.141858  869045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:51:54.142254  869045 mustload.go:66] Loading cluster: addons-441523
	I1119 21:51:54.142813  869045 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:51:54.142839  869045 addons.go:607] checking whether the cluster is paused
	I1119 21:51:54.143068  869045 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:51:54.143089  869045 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:51:54.143741  869045 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:51:54.161638  869045 ssh_runner.go:195] Run: systemctl --version
	I1119 21:51:54.161698  869045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:51:54.179669  869045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:51:54.285415  869045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:51:54.285537  869045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:51:54.315772  869045 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:51:54.315795  869045 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:51:54.315801  869045 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:51:54.315811  869045 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:51:54.315815  869045 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:51:54.315821  869045 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:51:54.315825  869045 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:51:54.315829  869045 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:51:54.315832  869045 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:51:54.315838  869045 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:51:54.315845  869045 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:51:54.315848  869045 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:51:54.315852  869045 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:51:54.315858  869045 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:51:54.315861  869045 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:51:54.315866  869045 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:51:54.315877  869045 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:51:54.315881  869045 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:51:54.315884  869045 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:51:54.315887  869045 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:51:54.315892  869045 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:51:54.315895  869045 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:51:54.315898  869045 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:51:54.315902  869045 cri.go:89] found id: ""
	I1119 21:51:54.315958  869045 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:51:54.331390  869045 out.go:203] 
	W1119 21:51:54.334281  869045 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:51:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:51:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:51:54.334310  869045 out.go:285] * 
	* 
	W1119 21:51:54.430412  869045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:51:54.433519  869045 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.39s)

TestAddons/parallel/Registry (16.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.020885ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003046512s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00390027s
addons_test.go:392: (dbg) Run:  kubectl --context addons-441523 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-441523 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-441523 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.673960572s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable registry --alsologtostderr -v=1: exit status 11 (328.450566ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:52:20.760291  869619 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:20.761668  869619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:20.761714  869619 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:20.761736  869619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:20.762038  869619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:20.762366  869619 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:20.762801  869619 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:20.762839  869619 addons.go:607] checking whether the cluster is paused
	I1119 21:52:20.763036  869619 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:20.763073  869619 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:20.763576  869619 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:20.785996  869619 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:20.786055  869619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:20.804879  869619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:20.905414  869619 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:20.905518  869619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:20.945894  869619 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:20.945918  869619 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:20.945923  869619 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:20.945927  869619 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:20.945931  869619 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:20.945935  869619 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:20.945938  869619 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:20.945941  869619 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:20.945945  869619 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:20.945955  869619 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:20.945961  869619 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:20.945965  869619 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:20.945968  869619 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:20.945971  869619 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:20.945975  869619 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:20.945982  869619 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:20.945988  869619 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:20.945994  869619 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:20.945997  869619 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:20.946000  869619 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:20.946005  869619 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:20.946012  869619 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:20.946015  869619 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:20.946018  869619 cri.go:89] found id: ""
	I1119 21:52:20.946067  869619 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:20.962280  869619 out.go:203] 
	W1119 21:52:20.965169  869619 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:20.965205  869619 out.go:285] * 
	* 
	W1119 21:52:20.971681  869619 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:20.974671  869619 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.31s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.891689ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-441523
addons_test.go:332: (dbg) Run:  kubectl --context addons-441523 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (289.540571ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:53:14.813142  871701 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:53:14.814006  871701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.814047  871701 out.go:374] Setting ErrFile to fd 2...
	I1119 21:53:14.814068  871701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.814377  871701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:53:14.814720  871701 mustload.go:66] Loading cluster: addons-441523
	I1119 21:53:14.815184  871701 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.815226  871701 addons.go:607] checking whether the cluster is paused
	I1119 21:53:14.815380  871701 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.815412  871701 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:53:14.815915  871701 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:53:14.836696  871701 ssh_runner.go:195] Run: systemctl --version
	I1119 21:53:14.836746  871701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:53:14.862999  871701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:53:14.965218  871701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:53:14.965302  871701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:53:14.996327  871701 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:53:14.996348  871701 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:53:14.996365  871701 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:53:14.996370  871701 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:53:14.996374  871701 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:53:14.996378  871701 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:53:14.996382  871701 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:53:14.996385  871701 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:53:14.996389  871701 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:53:14.996396  871701 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:53:14.996428  871701 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:53:14.996440  871701 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:53:14.996445  871701 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:53:14.996449  871701 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:53:14.996452  871701 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:53:14.996464  871701 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:53:14.996471  871701 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:53:14.996476  871701 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:53:14.996480  871701 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:53:14.996483  871701 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:53:14.996488  871701 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:53:14.996492  871701 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:53:14.996495  871701 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:53:14.996497  871701 cri.go:89] found id: ""
	I1119 21:53:14.996557  871701 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:53:15.032955  871701 out.go:203] 
	W1119 21:53:15.036089  871701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:53:15.036122  871701 out.go:285] * 
	* 
	W1119 21:53:15.042631  871701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:53:15.045883  871701 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (144.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-441523 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-441523 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-441523 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2c135613-acb3-43ff-8d73-d4ae68e57f62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2c135613-acb3-43ff-8d73-d4ae68e57f62] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.011574192s
I1119 21:52:50.500030  862175 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.493317061s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-441523 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
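
The curl step above timed out rather than being refused: the inner command exited with status 28, curl's operation-timed-out code, which the ssh wrapper then surfaced as exit status 1. A quick way to retry the same check by hand and look at the ingress controller, as a sketch using the profile and selector from this run:

    # Re-run the in-node request with an explicit timeout
    out/minikube-linux-arm64 -p addons-441523 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Inspect the ingress-nginx controller pods and their recent logs
    kubectl --context addons-441523 -n ingress-nginx get pods -o wide
    kubectl --context addons-441523 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50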
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-441523
helpers_test.go:243: (dbg) docker inspect addons-441523:

-- stdout --
	[
	    {
	        "Id": "414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b",
	        "Created": "2025-11-19T21:49:25.051412864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 863336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:49:25.114693035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/hostname",
	        "HostsPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/hosts",
	        "LogPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b-json.log",
	        "Name": "/addons-441523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-441523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-441523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b",
	                "LowerDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-441523",
	                "Source": "/var/lib/docker/volumes/addons-441523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-441523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-441523",
	                "name.minikube.sigs.k8s.io": "addons-441523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ac53c5a3a02539119c122f79df4f7398442c00691fe5476b146461c5c6d24b2",
	            "SandboxKey": "/var/run/docker/netns/8ac53c5a3a02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-441523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:25:51:bb:81:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9beb4116a0e41a7239b474a3998dd7108ccdf0e70a60f65f784a1ef2cc908173",
	                    "EndpointID": "ec7cdf6358e781da5f35f453674382391e27f3fa779fce6ed06a1065361e36c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-441523",
	                        "414e65357ea2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-441523 -n addons-441523
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-441523 logs -n 25: (1.462541764s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-739940                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-739940 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ --download-only -p binary-mirror-835231 --alsologtostderr --binary-mirror http://127.0.0.1:43589 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-835231   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ delete  │ -p binary-mirror-835231                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-835231   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ addons  │ disable dashboard -p addons-441523                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ addons  │ enable dashboard -p addons-441523                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ start   │ -p addons-441523 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:51 UTC │
	│ addons  │ addons-441523 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:51 UTC │                     │
	│ addons  │ addons-441523 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ ip      │ addons-441523 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-441523 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ ssh     │ addons-441523 ssh cat /opt/local-path-provisioner/pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-441523 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-441523 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ ssh     │ addons-441523 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:53 UTC │                     │
	│ addons  │ addons-441523 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:53 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-441523                                                                                                                                                                                                                                                                                                                                                                                           │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:53 UTC │ 19 Nov 25 21:53 UTC │
	│ addons  │ addons-441523 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:53 UTC │                     │
	│ ip      │ addons-441523 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:48:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:48:57.861702  862935 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:48:57.861845  862935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:57.861858  862935 out.go:374] Setting ErrFile to fd 2...
	I1119 21:48:57.861864  862935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:57.862137  862935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:48:57.862626  862935 out.go:368] Setting JSON to false
	I1119 21:48:57.863476  862935 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12667,"bootTime":1763576271,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 21:48:57.863543  862935 start.go:143] virtualization:  
	I1119 21:48:57.866957  862935 out.go:179] * [addons-441523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:48:57.870695  862935 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:48:57.870808  862935 notify.go:221] Checking for updates...
	I1119 21:48:57.876480  862935 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:48:57.879327  862935 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:48:57.882325  862935 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 21:48:57.885192  862935 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 21:48:57.888043  862935 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:48:57.891208  862935 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:48:57.920387  862935 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:48:57.920501  862935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:57.984176  862935 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:57.974953313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:57.984286  862935 docker.go:319] overlay module found
	I1119 21:48:57.987302  862935 out.go:179] * Using the docker driver based on user configuration
	I1119 21:48:57.990130  862935 start.go:309] selected driver: docker
	I1119 21:48:57.990145  862935 start.go:930] validating driver "docker" against <nil>
	I1119 21:48:57.990159  862935 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:48:57.990909  862935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:58.047623  862935 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:58.037583559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:58.047830  862935 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:48:58.048082  862935 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:48:58.051058  862935 out.go:179] * Using Docker driver with root privileges
	I1119 21:48:58.054005  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:48:58.054088  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:48:58.054098  862935 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:48:58.054193  862935 start.go:353] cluster config:
	{Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1119 21:48:58.059308  862935 out.go:179] * Starting "addons-441523" primary control-plane node in "addons-441523" cluster
	I1119 21:48:58.062186  862935 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:48:58.065101  862935 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:48:58.068060  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:48:58.068111  862935 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 21:48:58.068122  862935 cache.go:65] Caching tarball of preloaded images
	I1119 21:48:58.068148  862935 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:48:58.068218  862935 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 21:48:58.068229  862935 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:48:58.068586  862935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json ...
	I1119 21:48:58.068621  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json: {Name:mka57ff5f1b920d0aacbdf5cf225326ead9b2215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:48:58.084183  862935 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:48:58.084308  862935 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:48:58.084335  862935 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:48:58.084340  862935 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:48:58.084348  862935 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:48:58.084362  862935 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from local cache
	I1119 21:49:16.234792  862935 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from cached tarball
	I1119 21:49:16.234834  862935 cache.go:243] Successfully downloaded all kic artifacts
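
The steps above first look for the kicbase image in the local Docker daemon and in minikube's on-disk cache before pulling; here the cached tarball is loaded after roughly 18 seconds. A minimal sketch for checking the same image by hand (image reference copied from the log; not part of the test run):

    # Is the kicbase image already loaded in the local Docker daemon?
    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865'
    docker image inspect --format 'loaded: {{.Id}}' "$IMG" 2>/dev/null || echo 'not loaded in daemon'
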
	I1119 21:49:16.234888  862935 start.go:360] acquireMachinesLock for addons-441523: {Name:mk3d2e259db7e5fa8383aeccf2ef969557fd328e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:49:16.235619  862935 start.go:364] duration metric: took 702.615µs to acquireMachinesLock for "addons-441523"
	I1119 21:49:16.235662  862935 start.go:93] Provisioning new machine with config: &{Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:49:16.235748  862935 start.go:125] createHost starting for "" (driver="docker")
	I1119 21:49:16.239348  862935 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 21:49:16.239592  862935 start.go:159] libmachine.API.Create for "addons-441523" (driver="docker")
	I1119 21:49:16.239638  862935 client.go:173] LocalClient.Create starting
	I1119 21:49:16.239748  862935 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 21:49:17.650457  862935 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 21:49:18.099507  862935 cli_runner.go:164] Run: docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 21:49:18.114284  862935 cli_runner.go:211] docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 21:49:18.114367  862935 network_create.go:284] running [docker network inspect addons-441523] to gather additional debugging logs...
	I1119 21:49:18.114390  862935 cli_runner.go:164] Run: docker network inspect addons-441523
	W1119 21:49:18.128920  862935 cli_runner.go:211] docker network inspect addons-441523 returned with exit code 1
	I1119 21:49:18.128956  862935 network_create.go:287] error running [docker network inspect addons-441523]: docker network inspect addons-441523: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-441523 not found
	I1119 21:49:18.128970  862935 network_create.go:289] output of [docker network inspect addons-441523]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-441523 not found
	
	** /stderr **
	I1119 21:49:18.129100  862935 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:49:18.145581  862935 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001979a80}
	I1119 21:49:18.145627  862935 network_create.go:124] attempt to create docker network addons-441523 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 21:49:18.145681  862935 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-441523 addons-441523
	I1119 21:49:18.200697  862935 network_create.go:108] docker network addons-441523 192.168.49.0/24 created
	I1119 21:49:18.200731  862935 kic.go:121] calculated static IP "192.168.49.2" for the "addons-441523" container
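
The network is created with a fixed subnet (192.168.49.0/24), gateway (192.168.49.1) and MTU 1500, and labelled so minikube can find and clean it up later; the static node IP 192.168.49.2 is derived from that subnet. A hedged way to verify the result outside the test run (network name taken from the log):

    # Find the minikube-labelled network and read back its subnet and gateway.
    docker network ls --filter label=name.minikube.sigs.k8s.io=addons-441523
    docker network inspect addons-441523 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
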
	I1119 21:49:18.200818  862935 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 21:49:18.216370  862935 cli_runner.go:164] Run: docker volume create addons-441523 --label name.minikube.sigs.k8s.io=addons-441523 --label created_by.minikube.sigs.k8s.io=true
	I1119 21:49:18.234203  862935 oci.go:103] Successfully created a docker volume addons-441523
	I1119 21:49:18.234296  862935 cli_runner.go:164] Run: docker run --rm --name addons-441523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --entrypoint /usr/bin/test -v addons-441523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 21:49:20.541170  862935 cli_runner.go:217] Completed: docker run --rm --name addons-441523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --entrypoint /usr/bin/test -v addons-441523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib: (2.306834446s)
	I1119 21:49:20.541206  862935 oci.go:107] Successfully prepared a docker volume addons-441523
	I1119 21:49:20.541262  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:49:20.541272  862935 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 21:49:20.541335  862935 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-441523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 21:49:24.982056  862935 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-441523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.440681919s)
	I1119 21:49:24.982090  862935 kic.go:203] duration metric: took 4.440814319s to extract preloaded images to volume ...
	W1119 21:49:24.982228  862935 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 21:49:24.982338  862935 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 21:49:25.036233  862935 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-441523 --name addons-441523 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-441523 --network addons-441523 --ip 192.168.49.2 --volume addons-441523:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 21:49:25.323805  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Running}}
	I1119 21:49:25.343086  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.366328  862935 cli_runner.go:164] Run: docker exec addons-441523 stat /var/lib/dpkg/alternatives/iptables
	I1119 21:49:25.423160  862935 oci.go:144] the created container "addons-441523" has a running status.
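
The docker run above publishes the container ports listed in its flags (22, 2376, 5000, 8443, 32443) to random host ports bound to 127.0.0.1, including SSH on 22 and the API server on 8443. The assignments Docker chose for this run can be read back with (container name from the log):

    # Read back the host ports Docker picked for this node container.
    docker port addons-441523
    docker port addons-441523 22/tcp   # SSH port used by the provisioner below
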
	I1119 21:49:25.423186  862935 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa...
	I1119 21:49:25.789346  862935 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 21:49:25.817209  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.837088  862935 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 21:49:25.837115  862935 kic_runner.go:114] Args: [docker exec --privileged addons-441523 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 21:49:25.877210  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.895233  862935 machine.go:94] provisionDockerMachine start ...
	I1119 21:49:25.895346  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:25.912624  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:25.912971  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:25.912988  862935 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:49:25.913637  862935 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 21:49:29.058400  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-441523
	
	I1119 21:49:29.058422  862935 ubuntu.go:182] provisioning hostname "addons-441523"
	I1119 21:49:29.058487  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.076452  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.076770  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.076785  862935 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-441523 && echo "addons-441523" | sudo tee /etc/hostname
	I1119 21:49:29.227827  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-441523
	
	I1119 21:49:29.227916  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.245648  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.245955  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.245977  862935 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-441523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-441523/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-441523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:49:29.387170  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: 
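
The hostname and /etc/hosts commands above run over SSH to 127.0.0.1:33561 (the published port for 22/tcp) using the key generated earlier. Roughly the same session can be opened by hand for debugging; the port and key path are specific to this run:

    # Manual SSH into the node container, approximately what minikube ssh does for this profile.
    ssh -i /home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -p 33561 docker@127.0.0.1 hostname
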
	I1119 21:49:29.387195  862935 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 21:49:29.387225  862935 ubuntu.go:190] setting up certificates
	I1119 21:49:29.387246  862935 provision.go:84] configureAuth start
	I1119 21:49:29.387325  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:29.405092  862935 provision.go:143] copyHostCerts
	I1119 21:49:29.405181  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 21:49:29.405313  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 21:49:29.405393  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 21:49:29.405452  862935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.addons-441523 san=[127.0.0.1 192.168.49.2 addons-441523 localhost minikube]
	I1119 21:49:29.736048  862935 provision.go:177] copyRemoteCerts
	I1119 21:49:29.736124  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:49:29.736166  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.752732  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:29.854512  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 21:49:29.872289  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:49:29.889782  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:49:29.907827  862935 provision.go:87] duration metric: took 520.551136ms to configureAuth
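
configureAuth generated a server certificate with SANs for 127.0.0.1, 192.168.49.2, addons-441523, localhost and minikube, and copied it to /etc/docker/server.pem on the node. A quick sanity check from inside the node (a sketch, assuming an OpenSSL new enough to support the -ext flag):

    # Print the subject and SANs of the provisioned server certificate.
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
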
	I1119 21:49:29.907862  862935 ubuntu.go:206] setting minikube options for container-runtime
	I1119 21:49:29.908055  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:29.908165  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.925891  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.926210  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.926230  862935 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:49:30.263038  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:49:30.263064  862935 machine.go:97] duration metric: took 4.367807217s to provisionDockerMachine
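
Provisioning ends by writing /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry range, then restarting crio. Verifying it took effect is straightforward (run on the node; file path taken from the command above):

    # Confirm the option file exists and crio came back up after the restart.
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
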
	I1119 21:49:30.263089  862935 client.go:176] duration metric: took 14.02342532s to LocalClient.Create
	I1119 21:49:30.263102  862935 start.go:167] duration metric: took 14.023512616s to libmachine.API.Create "addons-441523"
	I1119 21:49:30.263112  862935 start.go:293] postStartSetup for "addons-441523" (driver="docker")
	I1119 21:49:30.263122  862935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:49:30.263193  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:49:30.263242  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.283088  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.383232  862935 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:49:30.386507  862935 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 21:49:30.386534  862935 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 21:49:30.386545  862935 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 21:49:30.386614  862935 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 21:49:30.386636  862935 start.go:296] duration metric: took 123.517497ms for postStartSetup
	I1119 21:49:30.386975  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:30.403526  862935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json ...
	I1119 21:49:30.403833  862935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:49:30.403899  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.421618  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.520066  862935 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 21:49:30.524821  862935 start.go:128] duration metric: took 14.289057311s to createHost
	I1119 21:49:30.524848  862935 start.go:83] releasing machines lock for "addons-441523", held for 14.289208936s
	I1119 21:49:30.524921  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:30.542065  862935 ssh_runner.go:195] Run: cat /version.json
	I1119 21:49:30.542117  862935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:49:30.542199  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.542124  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.565120  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.575014  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.749235  862935 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:30.755769  862935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:49:30.792594  862935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:49:30.796827  862935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:49:30.796897  862935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:49:30.824711  862935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 21:49:30.824733  862935 start.go:496] detecting cgroup driver to use...
	I1119 21:49:30.824765  862935 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 21:49:30.824815  862935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:49:30.841771  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:49:30.854490  862935 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:49:30.854557  862935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:49:30.873297  862935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:49:30.894328  862935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:49:31.017596  862935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:49:31.153846  862935 docker.go:234] disabling docker service ...
	I1119 21:49:31.153947  862935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:49:31.176473  862935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:49:31.191159  862935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:49:31.311081  862935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:49:31.439695  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:49:31.453385  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:49:31.468513  862935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:49:31.468633  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.477845  862935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:49:31.477957  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.487412  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.496806  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.505940  862935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:49:31.514944  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.524436  862935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.538491  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.547754  862935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:49:31.555675  862935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:49:31.563285  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:31.679281  862935 ssh_runner.go:195] Run: sudo systemctl restart crio
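
The sed edits above set the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup and allow unprivileged ports from 0, before reloading systemd and restarting crio. The rewritten values can be checked on the node with:

    # Show the settings minikube just rewrote in the CRI-O drop-in config.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
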
	I1119 21:49:31.854519  862935 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:49:31.854597  862935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:49:31.858327  862935 start.go:564] Will wait 60s for crictl version
	I1119 21:49:31.858384  862935 ssh_runner.go:195] Run: which crictl
	I1119 21:49:31.861799  862935 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 21:49:31.885654  862935 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 21:49:31.885828  862935 ssh_runner.go:195] Run: crio --version
	I1119 21:49:31.917697  862935 ssh_runner.go:195] Run: crio --version
	I1119 21:49:31.948924  862935 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 21:49:31.951512  862935 cli_runner.go:164] Run: docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:49:31.970112  862935 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 21:49:31.973988  862935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:49:31.983664  862935 kubeadm.go:884] updating cluster {Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:49:31.983791  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:49:31.983852  862935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:49:32.021704  862935 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:49:32.021734  862935 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:49:32.021792  862935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:49:32.047029  862935 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:49:32.047055  862935 cache_images.go:86] Images are preloaded, skipping loading
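
Both crictl images calls confirm the preload tarball already contains every image needed for v1.34.1, so nothing has to be pulled. A hedged spot-check of the control-plane images on the node:

    # List the preloaded control-plane images for the pinned Kubernetes version.
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'
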
	I1119 21:49:32.047063  862935 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 21:49:32.047169  862935 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-441523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 21:49:32.047260  862935 ssh_runner.go:195] Run: crio config
	I1119 21:49:32.099996  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:49:32.100021  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:49:32.100045  862935 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:49:32.100070  862935 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-441523 NodeName:addons-441523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:49:32.100198  862935 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-441523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
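
The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp a few lines down). If needed, it can be validated standalone; this is a sketch that assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.34.1 and that the release supports the config validate subcommand:

    # Validate the generated kubeadm configuration before init.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
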
	
	I1119 21:49:32.100279  862935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:49:32.108440  862935 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:49:32.108508  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:49:32.116718  862935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 21:49:32.129632  862935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:49:32.142740  862935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1119 21:49:32.155666  862935 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 21:49:32.159218  862935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:49:32.168719  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:32.282858  862935 ssh_runner.go:195] Run: sudo systemctl start kubelet
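
After the unit file and drop-in are copied over, systemd is reloaded and kubelet is started; it typically restarts until kubeadm init later writes its kubeconfig, which is expected at this stage. To watch it on the node (a sketch):

    # Watch kubelet come up on the node.
    systemctl status kubelet --no-pager
    journalctl -u kubelet --no-pager -n 20
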
	I1119 21:49:32.297974  862935 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523 for IP: 192.168.49.2
	I1119 21:49:32.297997  862935 certs.go:195] generating shared ca certs ...
	I1119 21:49:32.298012  862935 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.298188  862935 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 21:49:32.816911  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt ...
	I1119 21:49:32.816945  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt: {Name:mkf1d98d4e371ceb601e565d414bc633ade7a72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.817842  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key ...
	I1119 21:49:32.817871  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key: {Name:mk592e686c52cc1b9a8e48e3cbd0b8215de1fe61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.818042  862935 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 21:49:32.971558  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt ...
	I1119 21:49:32.971587  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt: {Name:mkd8d269b824ee7e8a1dfa7afa9dcf5651378848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.972408  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key ...
	I1119 21:49:32.972424  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key: {Name:mkf1e3ae1e2bec2690d49e7a1ab5c1df3f001005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.972509  862935 certs.go:257] generating profile certs ...
	I1119 21:49:32.972582  862935 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key
	I1119 21:49:32.972602  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt with IP's: []
	I1119 21:49:33.150033  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt ...
	I1119 21:49:33.150066  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: {Name:mkf27ca42c6172695431b1f1ec36368c0c0e561e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:33.151296  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key ...
	I1119 21:49:33.151313  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key: {Name:mke8cd1292dace8bab04bed1f3cd3ab58f4af8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:33.151415  862935 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8
	I1119 21:49:33.151439  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 21:49:34.361139  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 ...
	I1119 21:49:34.361172  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8: {Name:mk9140eb20c383f802d1d6b9c0b92851e6b30be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:34.361356  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8 ...
	I1119 21:49:34.361375  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8: {Name:mk55d8e694670224883581104513f3e8439eeabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:34.361457  862935 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt
	I1119 21:49:34.361535  862935 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key
	I1119 21:49:34.361588  862935 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key
	I1119 21:49:34.361608  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt with IP's: []
	I1119 21:49:35.020592  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt ...
	I1119 21:49:35.020627  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt: {Name:mk39ef9d435d3dd27e6848746daffd14262b99a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:35.020811  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key ...
	I1119 21:49:35.020825  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key: {Name:mk1deaf58fe1aea64276e2e9480d53503e7e3197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:35.021990  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 21:49:35.022038  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:49:35.022063  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:49:35.022089  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 21:49:35.022673  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:49:35.042487  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:49:35.061711  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:49:35.079640  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 21:49:35.098019  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 21:49:35.117012  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:49:35.134522  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:49:35.152821  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:49:35.170994  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:49:35.188647  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:49:35.201649  862935 ssh_runner.go:195] Run: openssl version
	I1119 21:49:35.207989  862935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:49:35.216654  862935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.220482  862935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.220550  862935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.261449  862935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:49:35.269943  862935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:49:35.274613  862935 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 21:49:35.274663  862935 kubeadm.go:401] StartCluster: {Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:49:35.274752  862935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:35.274811  862935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:35.312135  862935 cri.go:89] found id: ""
	I1119 21:49:35.312206  862935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:49:35.321903  862935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:49:35.330512  862935 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 21:49:35.330580  862935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:49:35.342414  862935 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 21:49:35.342438  862935 kubeadm.go:158] found existing configuration files:
	
	I1119 21:49:35.342488  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 21:49:35.349768  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 21:49:35.349837  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 21:49:35.357059  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 21:49:35.364749  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 21:49:35.364814  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:49:35.372047  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 21:49:35.379632  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 21:49:35.379724  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:49:35.387038  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 21:49:35.394554  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 21:49:35.394638  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 21:49:35.402111  862935 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 21:49:35.444194  862935 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 21:49:35.444517  862935 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 21:49:35.467080  862935 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 21:49:35.467163  862935 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 21:49:35.467206  862935 kubeadm.go:319] OS: Linux
	I1119 21:49:35.467258  862935 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 21:49:35.467312  862935 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 21:49:35.467366  862935 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 21:49:35.467419  862935 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 21:49:35.467473  862935 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 21:49:35.467527  862935 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 21:49:35.467579  862935 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 21:49:35.467636  862935 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 21:49:35.467687  862935 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 21:49:35.539746  862935 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 21:49:35.540014  862935 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 21:49:35.540162  862935 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 21:49:35.548873  862935 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 21:49:35.553165  862935 out.go:252]   - Generating certificates and keys ...
	I1119 21:49:35.553340  862935 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 21:49:35.553461  862935 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 21:49:35.760476  862935 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 21:49:36.926098  862935 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 21:49:38.091503  862935 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 21:49:38.245240  862935 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 21:49:38.550400  862935 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 21:49:38.550570  862935 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-441523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:49:39.625769  862935 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 21:49:39.625922  862935 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-441523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:49:39.878595  862935 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 21:49:40.148816  862935 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 21:49:40.359048  862935 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 21:49:40.359136  862935 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 21:49:40.692131  862935 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 21:49:41.180317  862935 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 21:49:41.457315  862935 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 21:49:41.996427  862935 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 21:49:42.427859  862935 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 21:49:42.428676  862935 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 21:49:42.431526  862935 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 21:49:42.435137  862935 out.go:252]   - Booting up control plane ...
	I1119 21:49:42.435277  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 21:49:42.435374  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 21:49:42.435465  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 21:49:42.452542  862935 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 21:49:42.452889  862935 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 21:49:42.461379  862935 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 21:49:42.461797  862935 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 21:49:42.461851  862935 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 21:49:42.607061  862935 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 21:49:42.607186  862935 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 21:49:43.604492  862935 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001039744s
	I1119 21:49:43.608315  862935 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 21:49:43.608423  862935 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 21:49:43.608816  862935 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 21:49:43.608933  862935 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 21:49:46.851283  862935 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.242632147s
	I1119 21:49:47.914615  862935 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.306310549s
	I1119 21:49:49.610641  862935 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002267046s
	I1119 21:49:49.630485  862935 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 21:49:49.642754  862935 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 21:49:49.657039  862935 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 21:49:49.657298  862935 kubeadm.go:319] [mark-control-plane] Marking the node addons-441523 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 21:49:49.668671  862935 kubeadm.go:319] [bootstrap-token] Using token: gdp8rj.h80e66euj21i98yv
	I1119 21:49:49.671801  862935 out.go:252]   - Configuring RBAC rules ...
	I1119 21:49:49.671947  862935 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 21:49:49.678009  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 21:49:49.690358  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 21:49:49.694696  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 21:49:49.699052  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 21:49:49.703214  862935 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 21:49:50.018305  862935 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 21:49:50.449645  862935 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 21:49:51.017368  862935 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 21:49:51.018666  862935 kubeadm.go:319] 
	I1119 21:49:51.018746  862935 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 21:49:51.018752  862935 kubeadm.go:319] 
	I1119 21:49:51.018833  862935 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 21:49:51.018838  862935 kubeadm.go:319] 
	I1119 21:49:51.018887  862935 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 21:49:51.018951  862935 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 21:49:51.019003  862935 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 21:49:51.019007  862935 kubeadm.go:319] 
	I1119 21:49:51.019064  862935 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 21:49:51.019069  862935 kubeadm.go:319] 
	I1119 21:49:51.019118  862935 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 21:49:51.019123  862935 kubeadm.go:319] 
	I1119 21:49:51.019177  862935 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 21:49:51.019255  862935 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 21:49:51.019326  862935 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 21:49:51.019331  862935 kubeadm.go:319] 
	I1119 21:49:51.019419  862935 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 21:49:51.019499  862935 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 21:49:51.019503  862935 kubeadm.go:319] 
	I1119 21:49:51.019590  862935 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gdp8rj.h80e66euj21i98yv \
	I1119 21:49:51.019697  862935 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 21:49:51.019719  862935 kubeadm.go:319] 	--control-plane 
	I1119 21:49:51.019722  862935 kubeadm.go:319] 
	I1119 21:49:51.019811  862935 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 21:49:51.019816  862935 kubeadm.go:319] 
	I1119 21:49:51.019901  862935 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gdp8rj.h80e66euj21i98yv \
	I1119 21:49:51.020061  862935 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 21:49:51.022652  862935 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 21:49:51.022913  862935 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 21:49:51.023056  862935 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 21:49:51.023083  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:49:51.023092  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:49:51.026388  862935 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 21:49:51.029307  862935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 21:49:51.033742  862935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 21:49:51.033762  862935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 21:49:51.049599  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 21:49:51.337813  862935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 21:49:51.337893  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:51.337951  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-441523 minikube.k8s.io/updated_at=2025_11_19T21_49_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=addons-441523 minikube.k8s.io/primary=true
	I1119 21:49:51.354939  862935 ops.go:34] apiserver oom_adj: -16
	I1119 21:49:51.531830  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:52.031970  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:52.532847  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:53.032622  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:53.532683  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:54.031945  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:54.532437  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:55.032000  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:55.532006  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:56.032058  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:56.152067  862935 kubeadm.go:1114] duration metric: took 4.814246952s to wait for elevateKubeSystemPrivileges
	I1119 21:49:56.152098  862935 kubeadm.go:403] duration metric: took 20.877438665s to StartCluster
	I1119 21:49:56.152117  862935 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:56.152261  862935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:49:56.152769  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:56.152980  862935 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:49:56.153122  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 21:49:56.153396  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:56.153436  862935 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 21:49:56.153537  862935 addons.go:70] Setting yakd=true in profile "addons-441523"
	I1119 21:49:56.153556  862935 addons.go:239] Setting addon yakd=true in "addons-441523"
	I1119 21:49:56.153586  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.154105  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.154614  862935 addons.go:70] Setting metrics-server=true in profile "addons-441523"
	I1119 21:49:56.154639  862935 addons.go:239] Setting addon metrics-server=true in "addons-441523"
	I1119 21:49:56.154663  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.155112  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.155241  862935 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-441523"
	I1119 21:49:56.155261  862935 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-441523"
	I1119 21:49:56.155281  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.155711  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158023  862935 addons.go:70] Setting registry=true in profile "addons-441523"
	I1119 21:49:56.158055  862935 addons.go:239] Setting addon registry=true in "addons-441523"
	I1119 21:49:56.158187  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158431  862935 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-441523"
	I1119 21:49:56.158513  862935 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-441523"
	I1119 21:49:56.158643  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158964  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.159313  862935 addons.go:70] Setting default-storageclass=true in profile "addons-441523"
	I1119 21:49:56.159348  862935 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-441523"
	I1119 21:49:56.159617  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158343  862935 addons.go:70] Setting registry-creds=true in profile "addons-441523"
	I1119 21:49:56.160607  862935 addons.go:239] Setting addon registry-creds=true in "addons-441523"
	I1119 21:49:56.160662  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.161248  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.170146  862935 addons.go:70] Setting gcp-auth=true in profile "addons-441523"
	I1119 21:49:56.170180  862935 mustload.go:66] Loading cluster: addons-441523
	I1119 21:49:56.170386  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:56.170639  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158356  862935 addons.go:70] Setting storage-provisioner=true in profile "addons-441523"
	I1119 21:49:56.175556  862935 addons.go:239] Setting addon storage-provisioner=true in "addons-441523"
	I1119 21:49:56.175597  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.176076  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.182477  862935 addons.go:70] Setting ingress=true in profile "addons-441523"
	I1119 21:49:56.182508  862935 addons.go:239] Setting addon ingress=true in "addons-441523"
	I1119 21:49:56.182559  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.183158  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158363  862935 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-441523"
	I1119 21:49:56.186390  862935 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-441523"
	I1119 21:49:56.186750  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.198940  862935 addons.go:70] Setting ingress-dns=true in profile "addons-441523"
	I1119 21:49:56.198973  862935 addons.go:239] Setting addon ingress-dns=true in "addons-441523"
	I1119 21:49:56.199014  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158369  862935 addons.go:70] Setting volcano=true in profile "addons-441523"
	I1119 21:49:56.199493  862935 addons.go:239] Setting addon volcano=true in "addons-441523"
	I1119 21:49:56.199519  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.199930  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.200222  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158394  862935 addons.go:70] Setting volumesnapshots=true in profile "addons-441523"
	I1119 21:49:56.208504  862935 addons.go:239] Setting addon volumesnapshots=true in "addons-441523"
	I1119 21:49:56.208549  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.209021  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.209171  862935 addons.go:70] Setting inspektor-gadget=true in profile "addons-441523"
	I1119 21:49:56.209189  862935 addons.go:239] Setting addon inspektor-gadget=true in "addons-441523"
	I1119 21:49:56.209209  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.209602  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158413  862935 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-441523"
	I1119 21:49:56.211942  862935 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-441523"
	I1119 21:49:56.211989  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.212479  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158420  862935 addons.go:70] Setting cloud-spanner=true in profile "addons-441523"
	I1119 21:49:56.299279  862935 addons.go:239] Setting addon cloud-spanner=true in "addons-441523"
	I1119 21:49:56.299360  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.299957  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.301987  862935 addons.go:239] Setting addon default-storageclass=true in "addons-441523"
	I1119 21:49:56.302132  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158428  862935 out.go:179] * Verifying Kubernetes components...
	I1119 21:49:56.320471  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.325279  862935 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 21:49:56.334941  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 21:49:56.335010  862935 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 21:49:56.335119  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.335310  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.363012  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:56.363345  862935 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 21:49:56.367099  862935 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:49:56.367123  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 21:49:56.367219  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.377455  862935 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 21:49:56.378512  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.387111  862935 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 21:49:56.389881  862935 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 21:49:56.390000  862935 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:49:56.390011  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 21:49:56.390085  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.390175  862935 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 21:49:56.383187  862935 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 21:49:56.393067  862935 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:49:56.393086  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 21:49:56.393151  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	W1119 21:49:56.383338  862935 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 21:49:56.383480  862935 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:49:56.399915  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 21:49:56.400000  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.432137  862935 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 21:49:56.435033  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 21:49:56.435057  862935 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 21:49:56.435127  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.436042  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 21:49:56.436221  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 21:49:56.436451  862935 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 21:49:56.436471  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 21:49:56.436540  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.334946  862935 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 21:49:56.450146  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.453595  862935 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:49:56.477728  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 21:49:56.477805  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.495421  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 21:49:56.459289  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 21:49:56.464681  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 21:49:56.498376  862935 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 21:49:56.498461  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.475365  862935 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 21:49:56.517629  862935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 21:49:56.517712  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.541597  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 21:49:56.541862  862935 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 21:49:56.551022  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.551522  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:49:56.553434  862935 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 21:49:56.553463  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 21:49:56.553531  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.559839  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 21:49:56.561118  862935 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-441523"
	I1119 21:49:56.561204  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.561707  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.578308  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:49:56.581331  862935 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:49:56.581355  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 21:49:56.581421  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.600035  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.603381  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.607061  862935 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 21:49:56.607112  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 21:49:56.609543  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.611501  862935 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:49:56.611518  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 21:49:56.611578  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.611992  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.615969  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 21:49:56.623047  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 21:49:56.627524  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 21:49:56.633511  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 21:49:56.636413  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 21:49:56.636438  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 21:49:56.636504  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.653783  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.682476  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.703042  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.718614  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.727905  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.745047  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.752749  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.758183  862935 out.go:179]   - Using image docker.io/busybox:stable
	I1119 21:49:56.761242  862935 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 21:49:56.764728  862935 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:49:56.764754  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 21:49:56.764816  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.780646  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	W1119 21:49:56.783051  862935 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:49:56.783081  862935 retry.go:31] will retry after 198.173201ms: ssh: handshake failed: EOF
	I1119 21:49:56.799097  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.944954  862935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:49:57.312019  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:49:57.339206  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:49:57.390576  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 21:49:57.390601  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 21:49:57.431625  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 21:49:57.435538  862935 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 21:49:57.435567  862935 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 21:49:57.440544  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:49:57.460793  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:49:57.504714  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 21:49:57.504742  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 21:49:57.515045  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 21:49:57.515073  862935 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 21:49:57.570845  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:49:57.573070  862935 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:49:57.573102  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 21:49:57.608019  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:49:57.613338  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 21:49:57.613366  862935 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 21:49:57.638648  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:49:57.638676  862935 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 21:49:57.676324  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 21:49:57.690004  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:49:57.757214  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 21:49:57.757250  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 21:49:57.769885  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:49:57.777661  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:49:57.844156  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 21:49:57.844204  862935 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 21:49:57.860258  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 21:49:57.860287  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 21:49:57.881428  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:49:58.020698  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 21:49:58.020769  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 21:49:58.022022  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 21:49:58.022083  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 21:49:58.067019  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 21:49:58.067086  862935 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 21:49:58.204547  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:49:58.204617  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 21:49:58.209478  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 21:49:58.209544  862935 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 21:49:58.214587  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 21:49:58.214657  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 21:49:58.382706  862935 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:49:58.382774  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 21:49:58.385524  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:49:58.389882  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 21:49:58.389950  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 21:49:58.393062  862935 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.897302196s)
	I1119 21:49:58.393179  862935 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1119 21:49:58.393138  862935 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.44812185s)
	I1119 21:49:58.394022  862935 node_ready.go:35] waiting up to 6m0s for node "addons-441523" to be "Ready" ...
	I1119 21:49:58.630237  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 21:49:58.630303  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 21:49:58.669002  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:49:58.805435  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 21:49:58.805462  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 21:49:58.901675  862935 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-441523" context rescaled to 1 replicas
	I1119 21:49:59.076224  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 21:49:59.076294  862935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 21:49:59.307796  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 21:49:59.307859  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 21:49:59.410590  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 21:49:59.410661  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 21:49:59.572766  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:49:59.572833  862935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 21:49:59.733754  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1119 21:50:00.411106  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:00.956029  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.643970145s)
	I1119 21:50:01.563873  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.123284226s)
	I1119 21:50:01.563945  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.103091174s)
	I1119 21:50:01.564008  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.132179348s)
	I1119 21:50:01.564285  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.225049408s)
	I1119 21:50:01.609577  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.038662224s)
	I1119 21:50:01.609789  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.001737033s)
	I1119 21:50:01.609867  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.93351055s)
	I1119 21:50:02.367900  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.590211141s)
	I1119 21:50:02.367932  862935 addons.go:480] Verifying addon registry=true in "addons-441523"
	I1119 21:50:02.367860  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.59793769s)
	I1119 21:50:02.368204  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.486731786s)
	I1119 21:50:02.368218  862935 addons.go:480] Verifying addon metrics-server=true in "addons-441523"
	I1119 21:50:02.368257  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.982673996s)
	I1119 21:50:02.368327  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.678298521s)
	I1119 21:50:02.368353  862935 addons.go:480] Verifying addon ingress=true in "addons-441523"
	I1119 21:50:02.368613  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.699525411s)
	W1119 21:50:02.368646  862935 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:50:02.368664  862935 retry.go:31] will retry after 180.609841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:50:02.371187  862935 out.go:179] * Verifying registry addon...
	I1119 21:50:02.373241  862935 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-441523 service yakd-dashboard -n yakd-dashboard
	
	I1119 21:50:02.373283  862935 out.go:179] * Verifying ingress addon...
	I1119 21:50:02.376099  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 21:50:02.378074  862935 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 21:50:02.387273  862935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:50:02.387301  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:02.387498  862935 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:50:02.387517  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:02.549976  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:50:02.821075  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.087201536s)
	I1119 21:50:02.821112  862935 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-441523"
	I1119 21:50:02.824208  862935 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 21:50:02.828071  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 21:50:02.840866  862935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:50:02.840904  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 21:50:02.902116  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:02.942677  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:02.943398  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.331397  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:03.380466  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:03.385005  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.832139  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:03.879025  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:03.881118  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.987885  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 21:50:03.987972  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:50:04.009333  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:50:04.115773  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 21:50:04.128834  862935 addons.go:239] Setting addon gcp-auth=true in "addons-441523"
	I1119 21:50:04.128882  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:50:04.129353  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:50:04.147129  862935 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 21:50:04.147186  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:50:04.164491  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:50:04.266002  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:50:04.268886  862935 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 21:50:04.271629  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 21:50:04.271658  862935 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 21:50:04.284991  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 21:50:04.285013  862935 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 21:50:04.299015  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:50:04.299039  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 21:50:04.311419  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:50:04.331683  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:04.379592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:04.382824  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:04.825562  862935 addons.go:480] Verifying addon gcp-auth=true in "addons-441523"
	I1119 21:50:04.828679  862935 out.go:179] * Verifying gcp-auth addon...
	I1119 21:50:04.832525  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 21:50:04.838824  862935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 21:50:04.838850  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:04.839019  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:04.879181  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:04.881746  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:05.331730  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:05.336314  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:05.379164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:05.381127  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:05.397100  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:05.831593  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:05.836039  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:05.879987  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:05.881355  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:06.332034  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:06.335452  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:06.379274  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:06.381403  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:06.831615  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:06.835467  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:06.880353  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:06.881833  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:07.331233  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:07.335789  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:07.380215  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:07.381955  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:07.831564  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:07.835472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:07.879497  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:07.882444  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:07.898637  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:08.331885  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:08.335194  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:08.379988  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:08.381031  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:08.832571  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:08.834974  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:08.879879  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:08.881602  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:09.332289  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:09.335861  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:09.380555  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:09.382052  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:09.831763  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:09.835340  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:09.879947  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:09.882270  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:09.911144  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:10.331172  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:10.336191  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:10.379087  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:10.381038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:10.831259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:10.835864  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:10.879502  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:10.881651  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:11.332206  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:11.335966  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:11.379501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:11.381511  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:11.832044  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:11.835672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:11.879426  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:11.881437  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:12.331556  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:12.336108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:12.379822  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:12.380786  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:12.397390  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:12.831632  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:12.835303  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:12.879293  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:12.881593  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:13.331524  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:13.335078  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:13.379690  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:13.381073  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:13.831623  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:13.836194  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:13.879713  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:13.881728  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:14.331150  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:14.335991  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:14.378753  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:14.381027  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:14.831321  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:14.836252  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:14.878811  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:14.880887  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:14.899318  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:15.331568  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:15.335969  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:15.379933  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:15.380637  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:15.831738  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:15.835395  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:15.878965  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:15.881162  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:16.331458  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:16.335843  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:16.379413  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:16.381616  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:16.831413  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:16.836168  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:16.880075  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:16.881347  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:16.899865  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:17.330940  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:17.336343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:17.379444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:17.381775  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:17.832238  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:17.835787  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:17.880993  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:17.881108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.332462  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:18.336073  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:18.379769  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.380942  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:18.831912  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:18.835694  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:18.880098  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.882741  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:19.330928  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:19.335472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:19.379348  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:19.381547  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:19.397407  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:19.831813  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:19.835937  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:19.879795  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:19.881218  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:20.331114  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:20.335656  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:20.379422  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:20.381587  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:20.831590  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:20.835089  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:20.878920  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:20.880998  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:21.331343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:21.335909  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:21.379779  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:21.380805  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:21.397686  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:21.832264  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:21.835824  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:21.879972  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:21.881536  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:22.331315  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:22.335887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:22.380072  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:22.381554  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:22.831259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:22.835923  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:22.879924  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:22.881201  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:23.330970  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:23.335460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:23.379391  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:23.381601  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:23.832181  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:23.835849  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:23.879436  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:23.881868  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:23.899063  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:24.331164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:24.336142  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:24.380751  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:24.381252  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:24.831343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:24.836240  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:24.879145  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:24.881597  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:25.330830  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:25.335356  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:25.378854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:25.381222  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:25.831304  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:25.836300  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:25.880167  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:25.881446  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:26.331258  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:26.335932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:26.380732  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:26.381469  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1119 21:50:26.397386  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:26.831698  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:26.835509  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:26.879459  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:26.883298  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:27.331778  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:27.335534  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:27.380007  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:27.381828  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:27.831662  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:27.835219  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:27.880004  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:27.881457  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:28.331932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:28.338977  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:28.379762  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:28.382006  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:28.397626  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:28.831986  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:28.835640  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:28.879444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:28.882204  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:29.334525  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:29.336490  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:29.379457  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:29.381694  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:29.831592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:29.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:29.879109  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:29.881126  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:30.335286  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:30.336657  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:30.379460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:30.381868  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:30.397859  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:30.831444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:30.836109  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:30.880336  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:30.881216  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:31.332113  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:31.335492  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:31.380288  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:31.381801  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:31.831612  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:31.836303  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:31.879010  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:31.881600  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:32.331309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:32.336065  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:32.379950  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:32.381378  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:32.831074  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:32.835792  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:32.879906  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:32.881194  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:32.898764  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:33.331854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:33.335588  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:33.379252  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:33.381461  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:33.831397  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:33.836234  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:33.878825  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:33.881000  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:34.331854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:34.335563  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:34.379253  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:34.381391  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:34.831131  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:34.835540  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:34.879170  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:34.881130  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:35.331536  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:35.335384  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:35.379054  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:35.381276  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:35.397066  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:35.830763  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:35.835606  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:35.879104  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:35.881358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:36.331327  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:36.336051  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:36.380119  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:36.381726  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:36.831838  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:36.835991  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:36.879495  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:36.881692  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.358171  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:37.362152  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:37.417157  862935 node_ready.go:49] node "addons-441523" is "Ready"
	I1119 21:50:37.417189  862935 node_ready.go:38] duration metric: took 39.023121117s for node "addons-441523" to be "Ready" ...
	I1119 21:50:37.417203  862935 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:50:37.417277  862935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:50:37.441851  862935 api_server.go:72] duration metric: took 41.288833011s to wait for apiserver process to appear ...
	I1119 21:50:37.441881  862935 api_server.go:88] waiting for apiserver healthz status ...
	I1119 21:50:37.441902  862935 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 21:50:37.443787  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.492870  862935 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 21:50:37.502705  862935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:50:37.502733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:37.522337  862935 api_server.go:141] control plane version: v1.34.1
	I1119 21:50:37.522374  862935 api_server.go:131] duration metric: took 80.483579ms to wait for apiserver health ...
	I1119 21:50:37.522395  862935 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 21:50:37.574647  862935 system_pods.go:59] 18 kube-system pods found
	I1119 21:50:37.574684  862935 system_pods.go:61] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending
	I1119 21:50:37.574692  862935 system_pods.go:61] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.574707  862935 system_pods.go:61] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.574712  862935 system_pods.go:61] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.574717  862935 system_pods.go:61] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.574721  862935 system_pods.go:61] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.574725  862935 system_pods.go:61] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.574731  862935 system_pods.go:61] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending
	I1119 21:50:37.574736  862935 system_pods.go:61] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.574741  862935 system_pods.go:61] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.574745  862935 system_pods.go:61] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.574758  862935 system_pods.go:61] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.574763  862935 system_pods.go:61] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending
	I1119 21:50:37.574767  862935 system_pods.go:61] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.574805  862935 system_pods.go:61] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.574813  862935 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.574818  862935 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.574822  862935 system_pods.go:61] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending
	I1119 21:50:37.574827  862935 system_pods.go:74] duration metric: took 52.425217ms to wait for pod list to return data ...
	I1119 21:50:37.574839  862935 default_sa.go:34] waiting for default service account to be created ...
	I1119 21:50:37.653063  862935 default_sa.go:45] found service account: "default"
	I1119 21:50:37.653091  862935 default_sa.go:55] duration metric: took 78.245632ms for default service account to be created ...
	I1119 21:50:37.653102  862935 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 21:50:37.753125  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:37.753169  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:37.753176  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.753182  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.753186  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:37.753190  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.753195  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.753199  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.753203  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.753211  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending
	I1119 21:50:37.753214  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.753218  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.753241  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.753246  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.753250  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending
	I1119 21:50:37.753260  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.753264  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.753267  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.753271  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.753275  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending
	I1119 21:50:37.753292  862935 retry.go:31] will retry after 220.650746ms: missing components: kube-dns
	I1119 21:50:37.854711  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:37.855189  862935 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:50:37.855210  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:37.905716  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:37.909085  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.981117  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:37.981157  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:37.981165  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.981171  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.981176  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:37.981188  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.981197  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.981202  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.981213  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.981220  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:37.981225  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.981237  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.981241  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.981245  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.981251  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:37.981255  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.981265  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.981270  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.981274  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.981282  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:37.981304  862935 retry.go:31] will retry after 273.024117ms: missing components: kube-dns
	I1119 21:50:38.258492  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.258530  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.258539  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.258547  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.258559  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:38.258569  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.258575  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.258586  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.258591  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.258598  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.258607  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.258611  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.258618  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.258626  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:38.258640  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.258646  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.258652  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:38.258659  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.258671  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.258679  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:38.258698  862935 retry.go:31] will retry after 342.667594ms: missing components: kube-dns
	I1119 21:50:38.340307  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:38.340410  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:38.443751  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:38.444105  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:38.609068  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.609103  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.609112  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.609119  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.609135  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:38.609147  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.609160  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.609166  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.609170  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.609183  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.609187  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.609192  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.609210  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.609226  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:38.609232  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.609239  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.609250  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:38.609257  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.609268  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.609274  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:38.609300  862935 retry.go:31] will retry after 378.765863ms: missing components: kube-dns
	I1119 21:50:38.837575  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:38.838753  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:38.880809  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:38.883989  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:38.995572  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.995617  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.995626  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.995635  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.995643  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:38.995648  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.995653  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.995658  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.995673  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.995685  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.995689  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.995694  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.995707  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.995714  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:38.995722  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.995733  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.995746  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:38.995753  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.995770  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.995781  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:38.995796  862935 retry.go:31] will retry after 717.350866ms: missing components: kube-dns
	I1119 21:50:39.331955  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:39.335854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:39.380697  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:39.383536  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:39.720226  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:39.720261  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:39.720278  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:39.720293  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:39.720301  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:39.720316  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:39.720326  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:39.720331  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:39.720337  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:39.720348  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:39.720352  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:39.720357  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:39.720371  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:39.720383  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:39.720390  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:39.720398  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:39.720406  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:39.720412  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:39.720423  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:39.720427  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:39.720449  862935 retry.go:31] will retry after 946.683909ms: missing components: kube-dns
	I1119 21:50:39.832973  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:39.836337  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:39.933950  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:39.934404  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:40.331932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:40.335862  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:40.379970  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:40.381676  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:40.672502  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:40.672591  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Running
	I1119 21:50:40.672617  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:40.672666  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:40.672697  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:40.672724  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:40.672757  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:40.672781  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:40.672807  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:40.672850  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:40.672877  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:40.672903  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:40.672938  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:40.672965  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:40.672992  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:40.673026  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:40.673050  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:40.673083  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:40.673118  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:40.673141  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:40.673167  862935 system_pods.go:126] duration metric: took 3.020058043s to wait for k8s-apps to be running ...
	I1119 21:50:40.673202  862935 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 21:50:40.673296  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:50:40.688500  862935 system_svc.go:56] duration metric: took 15.288635ms WaitForService to wait for kubelet
	I1119 21:50:40.688632  862935 kubeadm.go:587] duration metric: took 44.535616869s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:50:40.688671  862935 node_conditions.go:102] verifying NodePressure condition ...
	I1119 21:50:40.691759  862935 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 21:50:40.691838  862935 node_conditions.go:123] node cpu capacity is 2
	I1119 21:50:40.691878  862935 node_conditions.go:105] duration metric: took 3.170439ms to run NodePressure ...
	I1119 21:50:40.691921  862935 start.go:242] waiting for startup goroutines ...
	I1119 21:50:40.832968  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:40.835710  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:40.880018  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:40.882810  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:41.336313  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:41.432420  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:41.433052  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:41.433249  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:41.835611  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:41.835698  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:41.879670  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:41.881544  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:42.332567  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:42.335455  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:42.381662  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:42.383443  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:42.832117  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:42.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:42.880167  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:42.882312  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:43.332396  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:43.336497  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:43.381212  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:43.384016  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:43.832231  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:43.836525  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:43.881510  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:43.884284  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.332166  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:44.335608  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:44.379434  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:44.381831  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.832073  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:44.835833  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:44.881676  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.881811  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.335243  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:45.338415  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:45.435392  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.435714  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:45.831663  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:45.836242  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:45.879733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.882433  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.332502  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:46.336251  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:46.381147  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:46.382919  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.832687  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:46.835778  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:46.882928  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.883519  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.338551  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:47.434840  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:47.434989  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.435106  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:47.831742  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:47.836195  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:47.879821  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.881912  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:48.333127  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:48.336027  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:48.381457  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:48.383145  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:48.831859  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:48.835891  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:48.880349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:48.881051  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:49.332738  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:49.335611  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:49.380327  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:49.383397  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:49.833555  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:49.836512  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:49.880226  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:49.883389  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:50.332207  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:50.335978  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:50.382038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:50.382680  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:50.832125  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:50.836475  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:50.880915  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:50.883147  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:51.331968  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:51.336067  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:51.379233  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:51.381867  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:51.832266  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:51.835782  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:51.881028  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:51.882291  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:52.332976  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:52.335691  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:52.381445  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:52.382460  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:52.832806  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:52.835285  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:52.880764  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:52.882588  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:53.332146  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:53.335729  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:53.379505  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:53.381563  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:53.832164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:53.835177  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:53.880598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:53.881841  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:54.331794  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:54.336026  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:54.382812  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:54.383216  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:54.832495  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:54.836486  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:54.880718  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:54.882028  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:55.331581  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:55.335359  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:55.380566  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:55.382399  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:55.831598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:55.835458  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:55.883990  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:55.885204  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.333238  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:56.336483  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:56.380551  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:56.384548  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.838659  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:56.842293  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:56.940055  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.940570  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.331849  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:57.344619  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:57.379682  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.381851  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:57.837210  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:57.837309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:57.879733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.882358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:58.332412  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:58.336268  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:58.380985  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:58.382998  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:58.842939  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:58.843176  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:58.882563  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:58.882988  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:59.332642  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:59.335829  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:59.382183  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:59.383401  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:59.831850  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:59.835464  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:59.879958  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:59.883061  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:00.335487  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:00.354041  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:00.384063  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:00.385420  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:00.833346  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:00.836186  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:00.879480  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:00.882398  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:01.332369  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:01.336885  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:01.382481  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:01.382719  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:01.832990  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:01.835566  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:01.882524  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:01.883320  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:02.332250  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:02.336626  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:02.382136  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:02.382584  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:02.832844  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:02.835382  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:02.881897  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:02.884159  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:03.332228  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:03.336017  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:03.379521  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:03.382391  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:03.832758  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:03.835520  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:03.880486  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:03.883648  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.331379  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:04.335929  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:04.383686  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:04.384182  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.833020  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:04.835880  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:04.885552  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.886426  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.336709  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:05.341781  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:05.384625  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:05.385549  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.840119  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:05.840533  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:05.880390  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.886908  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:06.332666  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:06.335467  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:06.381351  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:06.383629  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:06.833171  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:06.835510  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:06.883496  862935 kapi.go:107] duration metric: took 1m4.507394384s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 21:51:06.888505  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:07.332274  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:07.335876  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:07.381770  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:07.832501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:07.835806  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:07.902623  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:08.332700  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:08.335218  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:08.381435  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:08.831887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:08.835395  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:08.881836  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:09.332768  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:09.335335  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:09.381824  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:09.831796  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:09.835205  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:09.881841  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:10.338887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:10.339441  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:10.433212  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:10.832499  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:10.836598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:10.882349  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:11.332675  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:11.335248  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:11.381739  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:11.832731  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:11.838295  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:11.881718  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:12.331582  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:12.335125  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:12.382914  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:12.832411  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:12.836430  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:12.881833  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:13.331347  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:13.336574  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:13.382091  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:13.839846  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:13.840554  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:13.882242  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:14.331957  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:14.335781  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:14.381921  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:14.831659  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:14.835472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:14.882266  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:15.331803  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:15.335172  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:15.381688  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:15.832007  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:15.835509  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:15.882142  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:16.332286  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:16.335817  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:16.382355  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:16.834501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:16.837289  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:16.933573  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:17.337411  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:17.342145  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:17.382495  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:17.834308  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:17.836383  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:17.882664  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:18.337342  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:18.337766  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:18.436937  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:18.831871  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:18.835961  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:18.884312  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:19.336373  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:19.336494  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:19.385643  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:19.833108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:19.837186  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:19.885038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:20.334358  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:20.337341  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:20.388453  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:20.836507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:20.837113  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:20.883800  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:21.344655  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:21.345147  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:21.381552  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:21.832932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:21.835500  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:21.881690  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:22.332309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:22.336187  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:22.381646  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:22.832342  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:22.836388  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:22.933902  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:23.332349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:23.336554  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:23.381641  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:23.831735  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:23.836352  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:23.882287  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:24.333803  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:24.335844  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:24.382943  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:24.831131  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:24.835511  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:24.885848  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:25.332097  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:25.335579  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:25.383630  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:25.832981  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:25.835065  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:25.882007  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:26.336175  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:26.336792  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:26.382627  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:26.832259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:26.835886  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:26.882656  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:27.333099  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:27.334975  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:27.380854  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:27.832699  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:27.835129  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:27.881755  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:28.331572  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:28.335189  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:28.381550  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:28.832596  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:28.835496  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:28.881914  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:29.331193  862935 kapi.go:107] duration metric: took 1m26.503121936s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 21:51:29.336632  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:29.382011  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:29.836507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:29.881480  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:30.335752  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:30.381943  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:30.835816  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:30.881778  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:31.336349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:31.381277  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:31.835592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:31.881617  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:32.335923  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:32.381960  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:32.836518  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:32.882418  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:33.336029  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:33.382087  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:33.835948  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:33.882282  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:34.335501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:34.381757  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:34.836059  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:34.881760  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:35.336473  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:35.381546  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:35.836337  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:35.881317  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:36.335507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:36.381585  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:36.835873  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:36.881996  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:37.336716  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:37.382068  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:37.835590  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:37.881921  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:38.336310  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:38.381484  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:38.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:38.881063  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:39.335672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:39.382014  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:39.837377  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:39.881592  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:40.336445  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:40.381305  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:40.835637  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:40.883212  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:41.335185  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:41.381736  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:41.836480  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:41.881775  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:42.337460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:42.382498  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:42.835621  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:42.881763  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:43.336062  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:43.381430  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:43.835507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:43.881550  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:44.335637  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:44.381737  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:44.836054  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:44.881785  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:45.336522  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:45.382122  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:45.836541  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:45.881651  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:46.335898  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:46.382079  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:46.835672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:46.883881  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:47.335887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:47.382573  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:47.836559  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:47.882358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:48.336155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:48.382237  862935 kapi.go:107] duration metric: took 1m46.004160603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 21:51:48.835696  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:49.336278  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:49.836325  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:50.336494  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:50.836138  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:51.336734  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:51.836622  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:52.336179  862935 kapi.go:107] duration metric: took 1m47.503652652s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 21:51:52.339275  862935 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-441523 cluster.
	I1119 21:51:52.342090  862935 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 21:51:52.344995  862935 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 21:51:52.347879  862935 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 21:51:52.350655  862935 addons.go:515] duration metric: took 1m56.197195738s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1119 21:51:52.350753  862935 start.go:247] waiting for cluster config update ...
	I1119 21:51:52.350789  862935 start.go:256] writing updated cluster config ...
	I1119 21:51:52.351166  862935 ssh_runner.go:195] Run: rm -f paused
	I1119 21:51:52.355889  862935 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:51:52.359650  862935 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dcqc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.364723  862935 pod_ready.go:94] pod "coredns-66bc5c9577-dcqc5" is "Ready"
	I1119 21:51:52.364750  862935 pod_ready.go:86] duration metric: took 5.07169ms for pod "coredns-66bc5c9577-dcqc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.367012  862935 pod_ready.go:83] waiting for pod "etcd-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.371421  862935 pod_ready.go:94] pod "etcd-addons-441523" is "Ready"
	I1119 21:51:52.371449  862935 pod_ready.go:86] duration metric: took 4.410717ms for pod "etcd-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.373486  862935 pod_ready.go:83] waiting for pod "kube-apiserver-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.377912  862935 pod_ready.go:94] pod "kube-apiserver-addons-441523" is "Ready"
	I1119 21:51:52.377939  862935 pod_ready.go:86] duration metric: took 4.419562ms for pod "kube-apiserver-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.380513  862935 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.760276  862935 pod_ready.go:94] pod "kube-controller-manager-addons-441523" is "Ready"
	I1119 21:51:52.760310  862935 pod_ready.go:86] duration metric: took 379.772736ms for pod "kube-controller-manager-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.960794  862935 pod_ready.go:83] waiting for pod "kube-proxy-v4ctw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.360249  862935 pod_ready.go:94] pod "kube-proxy-v4ctw" is "Ready"
	I1119 21:51:53.360278  862935 pod_ready.go:86] duration metric: took 399.451346ms for pod "kube-proxy-v4ctw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.560818  862935 pod_ready.go:83] waiting for pod "kube-scheduler-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.960784  862935 pod_ready.go:94] pod "kube-scheduler-addons-441523" is "Ready"
	I1119 21:51:53.960815  862935 pod_ready.go:86] duration metric: took 399.971312ms for pod "kube-scheduler-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.960829  862935 pod_ready.go:40] duration metric: took 1.604908866s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:51:54.029547  862935 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 21:51:54.032598  862935 out.go:179] * Done! kubectl is now configured to use "addons-441523" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 21:54:49 addons-441523 crio[829]: time="2025-11-19T21:54:49.95717549Z" level=info msg="Removed container 9325dfac6f0066fad9cdcfe3cd90a4904fe28c2b68e48cc8aca360c195ce6771: kube-system/registry-creds-764b6fb674-7msrk/registry-creds" id=985aaa66-cdff-44ea-aacb-716661e233e8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.623708857Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-h9z8v/POD" id=100a6d22-e6d3-4775-9098-9826ac6f4477 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.623781818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.644108718Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h9z8v Namespace:default ID:dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b UID:8626d8f4-2baf-46c7-9f7e-04d2ef497396 NetNS:/var/run/netns/f3f61478-82cc-47a4-ad94-0cff0f8220cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079c50}] Aliases:map[]}"
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.644156546Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-h9z8v to CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.675188515Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h9z8v Namespace:default ID:dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b UID:8626d8f4-2baf-46c7-9f7e-04d2ef497396 NetNS:/var/run/netns/f3f61478-82cc-47a4-ad94-0cff0f8220cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079c50}] Aliases:map[]}"
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.675537204Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-h9z8v for CNI network kindnet (type=ptp)"
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.690231451Z" level=info msg="Ran pod sandbox dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b with infra container: default/hello-world-app-5d498dc89-h9z8v/POD" id=100a6d22-e6d3-4775-9098-9826ac6f4477 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.692414166Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e300bc3c-431e-4e29-9964-ee2dd80a0928 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.692574685Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e300bc3c-431e-4e29-9964-ee2dd80a0928 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.692633927Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=e300bc3c-431e-4e29-9964-ee2dd80a0928 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.69517393Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ec014256-24d3-4c8d-ad69-177d4b5a94c1 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:55:02 addons-441523 crio[829]: time="2025-11-19T21:55:02.696666945Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.329139431Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=ec014256-24d3-4c8d-ad69-177d4b5a94c1 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.329898751Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0e03bbb7-6e81-4c3d-9f0c-f2b88c2205d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.332376838Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=924132c2-d66f-42af-896d-e8b2fecbe5be name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.339193596Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-h9z8v/hello-world-app" id=9b16f4a9-3c4e-4038-a5f6-1068f91406a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.339516504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.34744768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.347803598Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/75ef3cffe8e0dfa04e1f989e6c34cbf505cf4329d08e0f752983f4daef20a2af/merged/etc/passwd: no such file or directory"
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.347911218Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/75ef3cffe8e0dfa04e1f989e6c34cbf505cf4329d08e0f752983f4daef20a2af/merged/etc/group: no such file or directory"
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.348230344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.367813858Z" level=info msg="Created container 7a9901d9eb1819790dd26d8e37dbf7b66fb0cc8f3b453df1244415eb8ed8f9e2: default/hello-world-app-5d498dc89-h9z8v/hello-world-app" id=9b16f4a9-3c4e-4038-a5f6-1068f91406a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.369195627Z" level=info msg="Starting container: 7a9901d9eb1819790dd26d8e37dbf7b66fb0cc8f3b453df1244415eb8ed8f9e2" id=c1a47c48-a60b-4399-b92d-3b872f590b92 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:55:03 addons-441523 crio[829]: time="2025-11-19T21:55:03.37290856Z" level=info msg="Started container" PID=7469 containerID=7a9901d9eb1819790dd26d8e37dbf7b66fb0cc8f3b453df1244415eb8ed8f9e2 description=default/hello-world-app-5d498dc89-h9z8v/hello-world-app id=c1a47c48-a60b-4399-b92d-3b872f590b92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	7a9901d9eb181       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   dfa9f3d552706       hello-world-app-5d498dc89-h9z8v            default
	92e8d4ba1dc16       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             15 seconds ago           Exited              registry-creds                           1                   87a363966a091       registry-creds-764b6fb674-7msrk            kube-system
	20d6c3de1a08a       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   76e577596cdf2       nginx                                      default
	2154fca68aaca       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   add2bbae3efa8       busybox                                    default
	ef10317742de6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   54709cc8b1ee4       gcp-auth-78565c9fb4-sckk8                  gcp-auth
	34574a37d491b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   38d78b0a38947       ingress-nginx-controller-6c8bf45fb-rv9b4   ingress-nginx
	96f30c790da8c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	9da6964451ec2       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   ab6343e6098ad       ingress-nginx-admission-patch-tc7l2        ingress-nginx
	263912064df3e       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	73c4790ba1baf       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	f01ebeeec44c8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	6f1fc06239abc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   e235fd1afb89d       gadget-d99sd                               gadget
	c4eac1059aec2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	9b5b4ec60deae       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   9244debc7b674       metrics-server-85b7d694d7-sph2x            kube-system
	30b145c697f23       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   04c39abccbef5       yakd-dashboard-5ff678cb9-c98f8             yakd-dashboard
	edc8c67432b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   27dd06aab8cdf       snapshot-controller-7d9fbc56b8-p69nx       kube-system
	f30d47c42b19c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   775abe9cde1b9       ingress-nginx-admission-create-4d2m6       ingress-nginx
	8f25e4db79cca       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   727c9c34479cc       csi-hostpath-resizer-0                     kube-system
	28bb9ca16548a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   0657145fbd4a2       registry-6b586f9694-nmljk                  kube-system
	0cf11b3427234       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   23d58878a712d       kube-ingress-dns-minikube                  kube-system
	ce4788277f9a6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   89f6ba4ef53f1       registry-proxy-9279r                       kube-system
	23061303ad569       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   5837757f52d8d       local-path-provisioner-648f6765c9-9z5cl    local-path-storage
	6abf291cbc69c       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   f037a5d3ccc6b       cloud-spanner-emulator-6f9fcf858b-mk92d    default
	46c0e17f82719       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                   kube-system
	820452bcc27f8       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   5916b052c08aa       nvidia-device-plugin-daemonset-7k2x9       kube-system
	de9a0b0f37cb6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   fffc59c5a0310       csi-hostpath-attacher-0                    kube-system
	f8301b586f555       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   23197bfa36a96       snapshot-controller-7d9fbc56b8-tvq5m       kube-system
	66d8b85866603       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   460c463796b65       coredns-66bc5c9577-dcqc5                   kube-system
	55d6ec9aa9d53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   f58f66f298351       storage-provisioner                        kube-system
	b69600b273a1e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   5ebebe0689ec6       kube-proxy-v4ctw                           kube-system
	f0b1f859006b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   cd3df3b4720c3       kindnet-kz24p                              kube-system
	29fa20fcf4b84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   8bc9812b3b6f2       etcd-addons-441523                         kube-system
	c8ee152b70c2c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   c7f68659f681b       kube-scheduler-addons-441523               kube-system
	d6958a88d2715       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   48101ac26f782       kube-apiserver-addons-441523               kube-system
	8e7ca5d3f3c7d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   a6dd97d74f14a       kube-controller-manager-addons-441523      kube-system
	
	
	==> coredns [66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d] <==
	[INFO] 10.244.0.5:46511 - 57470 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00154773s
	[INFO] 10.244.0.5:46511 - 58694 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000131571s
	[INFO] 10.244.0.5:46511 - 40228 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000080518s
	[INFO] 10.244.0.5:35917 - 12832 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018118s
	[INFO] 10.244.0.5:35917 - 12594 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000197993s
	[INFO] 10.244.0.5:52613 - 40814 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085375s
	[INFO] 10.244.0.5:52613 - 40625 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069047s
	[INFO] 10.244.0.5:45581 - 62665 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008421s
	[INFO] 10.244.0.5:45581 - 62460 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068612s
	[INFO] 10.244.0.5:44242 - 30871 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001666205s
	[INFO] 10.244.0.5:44242 - 30699 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001646579s
	[INFO] 10.244.0.5:45510 - 52359 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114266s
	[INFO] 10.244.0.5:45510 - 52212 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014538s
	[INFO] 10.244.0.21:54506 - 12885 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170603s
	[INFO] 10.244.0.21:57111 - 36597 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252992s
	[INFO] 10.244.0.21:48495 - 35035 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143198s
	[INFO] 10.244.0.21:52899 - 29726 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000311823s
	[INFO] 10.244.0.21:44520 - 63407 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165647s
	[INFO] 10.244.0.21:35058 - 24855 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152396s
	[INFO] 10.244.0.21:43106 - 22714 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002461177s
	[INFO] 10.244.0.21:57894 - 62563 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002123442s
	[INFO] 10.244.0.21:34480 - 15293 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001969528s
	[INFO] 10.244.0.21:38829 - 61195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003362974s
	[INFO] 10.244.0.23:53274 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158517s
	[INFO] 10.244.0.23:57406 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147595s
	
	
	==> describe nodes <==
	Name:               addons-441523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-441523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=addons-441523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_49_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-441523
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-441523"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:49:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-441523
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 21:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 21:54:56 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 21:54:56 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 21:54:56 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 21:54:56 +0000   Wed, 19 Nov 2025 21:50:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-441523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                7e0289d3-2b72-41ab-9b05-c5cdea4768cd
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     cloud-spanner-emulator-6f9fcf858b-mk92d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  default                     hello-world-app-5d498dc89-h9z8v             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-d99sd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  gcp-auth                    gcp-auth-78565c9fb4-sckk8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-rv9b4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m2s
	  kube-system                 coredns-66bc5c9577-dcqc5                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m8s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 csi-hostpathplugin-k94bt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-addons-441523                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m14s
	  kube-system                 kindnet-kz24p                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m9s
	  kube-system                 kube-apiserver-addons-441523                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-addons-441523       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-v4ctw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-addons-441523                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 metrics-server-85b7d694d7-sph2x             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m4s
	  kube-system                 nvidia-device-plugin-daemonset-7k2x9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-6b586f9694-nmljk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 registry-creds-764b6fb674-7msrk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 registry-proxy-9279r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 snapshot-controller-7d9fbc56b8-p69nx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-tvq5m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  local-path-storage          local-path-provisioner-648f6765c9-9z5cl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-c98f8              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m6s                   kube-proxy       
	  Normal   Starting                 5m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node addons-441523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node addons-441523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m21s (x8 over 5m21s)  kubelet          Node addons-441523 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m14s                  kubelet          Node addons-441523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m14s                  kubelet          Node addons-441523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m14s                  kubelet          Node addons-441523 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m10s                  node-controller  Node addons-441523 event: Registered Node addons-441523 in Controller
	  Normal   NodeReady                4m27s                  kubelet          Node addons-441523 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 21:49] overlayfs: idmapped layers are currently not supported
	[  +0.079274] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40] <==
	{"level":"warn","ts":"2025-11-19T21:49:46.647597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.683378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.701152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.737243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.751707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.779804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.817791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.850010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.868007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.894198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.906496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.922795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.945217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.961446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.973266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.006602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.024296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.047393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.140534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:03.030990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:03.044982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.883158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.901294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.952161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.967117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55258","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ef10317742de6fb8a4ebf6e6ccbc71181b266fc7dc85d9c298129ebe3d52a1f9] <==
	2025/11/19 21:51:51 GCP Auth Webhook started!
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:52:15 Ready to marshal response ...
	2025/11/19 21:52:15 Ready to write response ...
	2025/11/19 21:52:17 Ready to marshal response ...
	2025/11/19 21:52:17 Ready to write response ...
	2025/11/19 21:52:17 Ready to marshal response ...
	2025/11/19 21:52:17 Ready to write response ...
	2025/11/19 21:52:25 Ready to marshal response ...
	2025/11/19 21:52:25 Ready to write response ...
	2025/11/19 21:52:42 Ready to marshal response ...
	2025/11/19 21:52:42 Ready to write response ...
	2025/11/19 21:52:45 Ready to marshal response ...
	2025/11/19 21:52:45 Ready to write response ...
	2025/11/19 21:53:04 Ready to marshal response ...
	2025/11/19 21:53:04 Ready to write response ...
	2025/11/19 21:55:02 Ready to marshal response ...
	2025/11/19 21:55:02 Ready to write response ...
	
	
	==> kernel <==
	 21:55:04 up  3:37,  0 user,  load average: 0.81, 1.33, 1.58
	Linux addons-441523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97] <==
	I1119 21:52:57.251274       1 main.go:301] handling current node
	I1119 21:53:07.250931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:07.251062       1 main.go:301] handling current node
	I1119 21:53:17.250999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:17.251036       1 main.go:301] handling current node
	I1119 21:53:27.252776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:27.252821       1 main.go:301] handling current node
	I1119 21:53:37.254948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:37.254983       1 main.go:301] handling current node
	I1119 21:53:47.259334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:47.259444       1 main.go:301] handling current node
	I1119 21:53:57.250134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:57.250167       1 main.go:301] handling current node
	I1119 21:54:07.255003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:07.255115       1 main.go:301] handling current node
	I1119 21:54:17.258946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:17.259059       1 main.go:301] handling current node
	I1119 21:54:27.250134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:27.250482       1 main.go:301] handling current node
	I1119 21:54:37.257506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:37.257546       1 main.go:301] handling current node
	I1119 21:54:47.259092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:47.259126       1 main.go:301] handling current node
	I1119 21:54:57.250228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:57.250261       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629] <==
	E1119 21:50:37.568192       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.10:443: connect: connection refused" logger="UnhandledError"
	W1119 21:51:02.505976       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:02.506021       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1119 21:51:02.506036       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:51:02.507190       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:02.507268       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:51:02.507279       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:51:29.052810       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:29.052881       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 21:51:29.053762       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.221.128:443: connect: connection refused" logger="UnhandledError"
	E1119 21:51:29.056830       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.221.128:443: connect: connection refused" logger="UnhandledError"
	I1119 21:51:29.171415       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 21:52:04.048103       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55982: use of closed network connection
	E1119 21:52:04.275809       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56008: use of closed network connection
	E1119 21:52:04.401119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56026: use of closed network connection
	I1119 21:52:42.159952       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 21:52:42.470340       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.76.62"}
	I1119 21:52:56.116764       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1119 21:55:02.471037       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.247.162"}
	
	
	==> kube-controller-manager [8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487] <==
	I1119 21:49:54.881871       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 21:49:54.881890       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 21:49:54.897220       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:49:54.899399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:49:54.904546       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 21:49:54.914142       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 21:49:54.914284       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 21:49:54.914819       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 21:49:54.914839       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 21:49:54.915970       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 21:49:54.917898       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 21:49:54.918937       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1119 21:50:00.561283       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 21:50:24.874721       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:50:24.874911       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 21:50:24.875020       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 21:50:24.939845       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 21:50:24.944269       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 21:50:24.976020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:50:25.045334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:50:39.874716       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1119 21:50:54.983512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:50:55.074771       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1119 21:51:24.988816       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:51:25.084511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea] <==
	I1119 21:49:57.071104       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:49:57.176574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:49:57.277429       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:49:57.277457       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:49:57.277524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:49:57.354530       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:49:57.354578       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:49:57.366379       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:49:57.366675       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:49:57.366698       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:49:57.368072       1 config.go:200] "Starting service config controller"
	I1119 21:49:57.368090       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:49:57.368116       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:49:57.368120       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:49:57.368139       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:49:57.368143       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:49:57.368746       1 config.go:309] "Starting node config controller"
	I1119 21:49:57.368759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:49:57.368764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:49:57.468571       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:49:57.468644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:49:57.468286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3] <==
	E1119 21:49:47.916692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:49:47.917626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 21:49:47.919063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:49:47.920119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 21:49:47.920398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:49:47.920511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:49:47.920601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:49:47.920689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:49:47.920798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 21:49:47.920888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:49:47.921025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:49:47.921144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:49:48.722387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:49:48.728010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:49:48.741854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:49:48.760009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:49:48.815158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:49:48.842419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:49:48.879659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:49:48.994025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:49:49.040861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:49:49.053452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:49:49.115738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:49:49.380225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1119 21:49:52.316176       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 21:53:13 addons-441523 kubelet[1279]: I1119 21:53:13.600980    1279 scope.go:117] "RemoveContainer" containerID="9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643"
	Nov 19 21:53:13 addons-441523 kubelet[1279]: E1119 21:53:13.602383    1279 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643\": container with ID starting with 9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643 not found: ID does not exist" containerID="9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643"
	Nov 19 21:53:13 addons-441523 kubelet[1279]: I1119 21:53:13.602576    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643"} err="failed to get container status \"9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643\": rpc error: code = NotFound desc = could not find container \"9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643\": container with ID starting with 9a90326c71e9e91051ea66791904eb595a1f1f80f34ecf2c84b9a5ab54988643 not found: ID does not exist"
	Nov 19 21:53:14 addons-441523 kubelet[1279]: I1119 21:53:14.409900    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a92a10-98d1-42cc-8a7f-4299d72ba2b4" path="/var/lib/kubelet/pods/86a92a10-98d1-42cc-8a7f-4299d72ba2b4/volumes"
	Nov 19 21:53:39 addons-441523 kubelet[1279]: I1119 21:53:39.406944    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-nmljk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:53:46 addons-441523 kubelet[1279]: I1119 21:53:46.407506    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9279r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:07 addons-441523 kubelet[1279]: I1119 21:54:07.406838    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7k2x9" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:47 addons-441523 kubelet[1279]: I1119 21:54:47.508499    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7msrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:48 addons-441523 kubelet[1279]: I1119 21:54:48.406738    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-nmljk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:48 addons-441523 kubelet[1279]: I1119 21:54:48.934758    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7msrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:48 addons-441523 kubelet[1279]: I1119 21:54:48.934815    1279 scope.go:117] "RemoveContainer" containerID="9325dfac6f0066fad9cdcfe3cd90a4904fe28c2b68e48cc8aca360c195ce6771"
	Nov 19 21:54:49 addons-441523 kubelet[1279]: I1119 21:54:49.940694    1279 scope.go:117] "RemoveContainer" containerID="9325dfac6f0066fad9cdcfe3cd90a4904fe28c2b68e48cc8aca360c195ce6771"
	Nov 19 21:54:49 addons-441523 kubelet[1279]: I1119 21:54:49.941044    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7msrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:49 addons-441523 kubelet[1279]: I1119 21:54:49.941084    1279 scope.go:117] "RemoveContainer" containerID="92e8d4ba1dc16b7f498d01dd9fa1f6f6c6b0c2f523b56c90d18c428d828673cd"
	Nov 19 21:54:49 addons-441523 kubelet[1279]: E1119 21:54:49.941231    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7msrk_kube-system(0664d29c-371e-4498-9492-5bf78cd26131)\"" pod="kube-system/registry-creds-764b6fb674-7msrk" podUID="0664d29c-371e-4498-9492-5bf78cd26131"
	Nov 19 21:54:50 addons-441523 kubelet[1279]: E1119 21:54:50.525692    1279 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/579edad0fb347a2222dca26627271d865b80d9450e8180c44ee499ac3b201f39/diff" to get inode usage: stat /var/lib/containers/storage/overlay/579edad0fb347a2222dca26627271d865b80d9450e8180c44ee499ac3b201f39/diff: no such file or directory, extraDiskErr: <nil>
	Nov 19 21:54:50 addons-441523 kubelet[1279]: I1119 21:54:50.945387    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7msrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:54:50 addons-441523 kubelet[1279]: I1119 21:54:50.945443    1279 scope.go:117] "RemoveContainer" containerID="92e8d4ba1dc16b7f498d01dd9fa1f6f6c6b0c2f523b56c90d18c428d828673cd"
	Nov 19 21:54:50 addons-441523 kubelet[1279]: E1119 21:54:50.945594    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7msrk_kube-system(0664d29c-371e-4498-9492-5bf78cd26131)\"" pod="kube-system/registry-creds-764b6fb674-7msrk" podUID="0664d29c-371e-4498-9492-5bf78cd26131"
	Nov 19 21:54:58 addons-441523 kubelet[1279]: I1119 21:54:58.409599    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9279r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:55:02 addons-441523 kubelet[1279]: I1119 21:55:02.459683    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8626d8f4-2baf-46c7-9f7e-04d2ef497396-gcp-creds\") pod \"hello-world-app-5d498dc89-h9z8v\" (UID: \"8626d8f4-2baf-46c7-9f7e-04d2ef497396\") " pod="default/hello-world-app-5d498dc89-h9z8v"
	Nov 19 21:55:02 addons-441523 kubelet[1279]: I1119 21:55:02.460223    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nsh\" (UniqueName: \"kubernetes.io/projected/8626d8f4-2baf-46c7-9f7e-04d2ef497396-kube-api-access-f6nsh\") pod \"hello-world-app-5d498dc89-h9z8v\" (UID: \"8626d8f4-2baf-46c7-9f7e-04d2ef497396\") " pod="default/hello-world-app-5d498dc89-h9z8v"
	Nov 19 21:55:02 addons-441523 kubelet[1279]: W1119 21:55:02.686707    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/crio-dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b WatchSource:0}: Error finding container dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b: Status 404 returned error can't find the container with id dfa9f3d552706919a5a3240d910db0f112ace7f3969d22549122b7ba143b6c4b
	Nov 19 21:55:04 addons-441523 kubelet[1279]: I1119 21:55:04.407307    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7msrk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:55:04 addons-441523 kubelet[1279]: I1119 21:55:04.407378    1279 scope.go:117] "RemoveContainer" containerID="92e8d4ba1dc16b7f498d01dd9fa1f6f6c6b0c2f523b56c90d18c428d828673cd"
	
	
	==> storage-provisioner [55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d] <==
	W1119 21:54:39.815654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:41.818831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:41.823549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:43.826436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:43.831174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:45.834590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:45.841777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:47.847234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:47.852605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:49.856322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:49.860773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:51.864639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:51.871644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:53.874586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:53.879310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:55.882091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:55.888941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:57.892575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:57.899580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:59.903796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:59.908268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:55:01.911324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:55:01.916602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:55:03.919865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:55:03.924927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-441523 -n addons-441523
helpers_test.go:269: (dbg) Run:  kubectl --context addons-441523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2: exit status 1 (102.194128ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4d2m6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tc7l2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (316.567921ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:55:05.615225  872803 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:05.619045  872803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:05.619064  872803 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:05.619070  872803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:05.619426  872803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:55:05.619765  872803 mustload.go:66] Loading cluster: addons-441523
	I1119 21:55:05.620200  872803 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:05.620227  872803 addons.go:607] checking whether the cluster is paused
	I1119 21:55:05.620339  872803 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:05.620356  872803 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:55:05.620840  872803 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:55:05.650057  872803 ssh_runner.go:195] Run: systemctl --version
	I1119 21:55:05.650123  872803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:55:05.670847  872803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:55:05.777567  872803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:55:05.777660  872803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:55:05.812576  872803 cri.go:89] found id: "45a53af38bc9733a63179fdcca1baece842ead9ebb8a25c3d7329836776687c4"
	I1119 21:55:05.812597  872803 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:55:05.812602  872803 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:55:05.812606  872803 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:55:05.812609  872803 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:55:05.812614  872803 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:55:05.812621  872803 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:55:05.812627  872803 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:55:05.812630  872803 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:55:05.812636  872803 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:55:05.812640  872803 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:55:05.812644  872803 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:55:05.812647  872803 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:55:05.812650  872803 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:55:05.812653  872803 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:55:05.812658  872803 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:55:05.812661  872803 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:55:05.812665  872803 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:55:05.812668  872803 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:55:05.812671  872803 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:55:05.812676  872803 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:55:05.812679  872803 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:55:05.812682  872803 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:55:05.812685  872803 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:55:05.812688  872803 cri.go:89] found id: ""
	I1119 21:55:05.812738  872803 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:55:05.833869  872803 out.go:203] 
	W1119 21:55:05.838397  872803 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:55:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:55:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:55:05.838426  872803 out.go:285] * 
	* 
	W1119 21:55:05.844944  872803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:55:05.849074  872803 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable ingress --alsologtostderr -v=1: exit status 11 (315.947826ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:55:05.912763  872916 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:05.913588  872916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:05.913631  872916 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:05.913652  872916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:05.913970  872916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:55:05.914357  872916 mustload.go:66] Loading cluster: addons-441523
	I1119 21:55:05.914806  872916 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:05.914850  872916 addons.go:607] checking whether the cluster is paused
	I1119 21:55:05.915107  872916 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:05.915147  872916 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:55:05.915670  872916 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:55:05.944306  872916 ssh_runner.go:195] Run: systemctl --version
	I1119 21:55:05.944394  872916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:55:05.968373  872916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:55:06.075366  872916 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:55:06.075481  872916 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:55:06.129457  872916 cri.go:89] found id: "45a53af38bc9733a63179fdcca1baece842ead9ebb8a25c3d7329836776687c4"
	I1119 21:55:06.129479  872916 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:55:06.129485  872916 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:55:06.129488  872916 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:55:06.129492  872916 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:55:06.129495  872916 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:55:06.129498  872916 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:55:06.129508  872916 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:55:06.129511  872916 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:55:06.129518  872916 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:55:06.129521  872916 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:55:06.129525  872916 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:55:06.129528  872916 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:55:06.129532  872916 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:55:06.129535  872916 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:55:06.129546  872916 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:55:06.129557  872916 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:55:06.129566  872916 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:55:06.129570  872916 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:55:06.129573  872916 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:55:06.129578  872916 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:55:06.129581  872916 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:55:06.129583  872916 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:55:06.129586  872916 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:55:06.129589  872916 cri.go:89] found id: ""
	I1119 21:55:06.129639  872916 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:55:06.149474  872916 out.go:203] 
	W1119 21:55:06.152802  872916 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:55:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:55:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:55:06.152859  872916 out.go:285] * 
	* 
	W1119 21:55:06.159730  872916 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:55:06.165260  872916 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.33s)
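Every addons-disable failure in this report shows the same sequence in its stderr: minikube first checks whether the cluster is paused by listing kube-system containers with crictl, then shells out to "sudo runc list -f json"; that second command exits 1 because /run/runc does not exist on this crio node, so the CLI aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. A minimal sketch of reproducing that check by hand against this profile; the last line, probing crun's state directory, is an assumption about the node image and is not confirmed by this log:

	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, returns the container IDs listed above
	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo runc list -f json    # fails here: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-441523 ssh -- ls /run/crun              # hypothetical probe: crio may be configured with crun, whose default state dir is /run/crun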

TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-d99sd" [00e6e9d3-c493-4b26-a7d8-2e8df10ab5ed] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003382152s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (276.191096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:52:41.615968  870676 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:41.617422  870676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:41.617476  870676 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:41.617498  870676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:41.617797  870676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:41.618145  870676 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:41.618579  870676 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:41.618626  870676 addons.go:607] checking whether the cluster is paused
	I1119 21:52:41.618760  870676 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:41.618798  870676 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:41.619337  870676 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:41.638989  870676 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:41.639059  870676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:41.658630  870676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:41.761640  870676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:41.761769  870676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:41.811926  870676 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:41.811948  870676 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:41.811954  870676 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:41.811957  870676 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:41.811961  870676 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:41.811964  870676 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:41.811973  870676 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:41.811976  870676 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:41.811980  870676 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:41.811987  870676 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:41.811990  870676 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:41.811993  870676 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:41.811997  870676 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:41.812000  870676 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:41.812003  870676 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:41.812008  870676 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:41.812011  870676 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:41.812016  870676 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:41.812020  870676 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:41.812023  870676 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:41.812027  870676 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:41.812030  870676 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:41.812033  870676 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:41.812035  870676 cri.go:89] found id: ""
	I1119 21:52:41.812085  870676 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:41.827822  870676 out.go:203] 
	W1119 21:52:41.830727  870676 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:41.830756  870676 out.go:285] * 
	* 
	W1119 21:52:41.837222  870676 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:41.840252  870676 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

TestAddons/parallel/MetricsServer (6.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.196138ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003381349s
addons_test.go:463: (dbg) Run:  kubectl --context addons-441523 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (257.936277ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:52:35.363531  870568 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:35.364265  870568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:35.364370  870568 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:35.364392  870568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:35.364734  870568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:35.365080  870568 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:35.365501  870568 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:35.365539  870568 addons.go:607] checking whether the cluster is paused
	I1119 21:52:35.365683  870568 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:35.365715  870568 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:35.366194  870568 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:35.384461  870568 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:35.384531  870568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:35.401538  870568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:35.501534  870568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:35.501621  870568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:35.532169  870568 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:35.532202  870568 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:35.532208  870568 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:35.532212  870568 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:35.532215  870568 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:35.532219  870568 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:35.532231  870568 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:35.532235  870568 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:35.532238  870568 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:35.532244  870568 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:35.532251  870568 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:35.532254  870568 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:35.532258  870568 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:35.532262  870568 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:35.532265  870568 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:35.532269  870568 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:35.532276  870568 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:35.532280  870568 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:35.532283  870568 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:35.532285  870568 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:35.532290  870568 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:35.532293  870568 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:35.532296  870568 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:35.532299  870568 cri.go:89] found id: ""
	I1119 21:52:35.532349  870568 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:35.547605  870568 out.go:203] 
	W1119 21:52:35.550466  870568 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:35.550487  870568 out.go:285] * 
	* 
	W1119 21:52:35.556890  870568 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:35.559946  870568 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)

TestAddons/parallel/CSI (48.1s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1119 21:52:26.446596  862175 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 21:52:26.449779  862175 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 21:52:26.449801  862175 kapi.go:107] duration metric: took 3.218044ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.228096ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-441523 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-441523 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1a2843b0-7942-4c04-b15c-dc1cadfde3eb] Pending
helpers_test.go:352: "task-pv-pod" [1a2843b0-7942-4c04-b15c-dc1cadfde3eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1a2843b0-7942-4c04-b15c-dc1cadfde3eb] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.006284082s
addons_test.go:572: (dbg) Run:  kubectl --context addons-441523 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-441523 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-441523 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-441523 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-441523 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-441523 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-441523 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [86a92a10-98d1-42cc-8a7f-4299d72ba2b4] Pending
helpers_test.go:352: "task-pv-pod-restore" [86a92a10-98d1-42cc-8a7f-4299d72ba2b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [86a92a10-98d1-42cc-8a7f-4299d72ba2b4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.014906455s
addons_test.go:614: (dbg) Run:  kubectl --context addons-441523 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-441523 delete pod task-pv-pod-restore: (1.133298043s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-441523 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-441523 delete volumesnapshot new-snapshot-demo
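The CSI steps above walk the full snapshot round trip: a claim (pvc.yaml), a pod writing to it (pv-pod.yaml), a VolumeSnapshot of that claim (snapshot.yaml), then a second claim restored from the snapshot (pvc-restore.yaml) with its own pod, and finally cleanup. A rough sketch of what the restore step amounts to, assuming the standard snapshot.storage.k8s.io/v1 dataSource mechanism and a csi-hostpath-sc storage class; this is illustrative only, not the repository's actual testdata:

	kubectl --context addons-441523 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc     # assumed class name provided by the csi-hostpath-driver addon
	  dataSource:
	    name: new-snapshot-demo             # the VolumeSnapshot created from the original hpvc claim
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi                       # assumed size; the testdata value is not shown in this log
	EOF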
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (256.434889ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:53:14.056059  871605 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:53:14.057008  871605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.057058  871605 out.go:374] Setting ErrFile to fd 2...
	I1119 21:53:14.057085  871605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.057944  871605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:53:14.058377  871605 mustload.go:66] Loading cluster: addons-441523
	I1119 21:53:14.058827  871605 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.058851  871605 addons.go:607] checking whether the cluster is paused
	I1119 21:53:14.059084  871605 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.059110  871605 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:53:14.059605  871605 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:53:14.078088  871605 ssh_runner.go:195] Run: systemctl --version
	I1119 21:53:14.078155  871605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:53:14.097887  871605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:53:14.201351  871605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:53:14.201435  871605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:53:14.232060  871605 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:53:14.232083  871605 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:53:14.232089  871605 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:53:14.232092  871605 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:53:14.232096  871605 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:53:14.232107  871605 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:53:14.232111  871605 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:53:14.232114  871605 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:53:14.232118  871605 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:53:14.232123  871605 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:53:14.232131  871605 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:53:14.232134  871605 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:53:14.232138  871605 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:53:14.232141  871605 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:53:14.232144  871605 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:53:14.232155  871605 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:53:14.232158  871605 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:53:14.232163  871605 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:53:14.232166  871605 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:53:14.232169  871605 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:53:14.232173  871605 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:53:14.232187  871605 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:53:14.232190  871605 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:53:14.232193  871605 cri.go:89] found id: ""
	I1119 21:53:14.232252  871605 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:53:14.247218  871605 out.go:203] 
	W1119 21:53:14.250103  871605 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:53:14.250138  871605 out.go:285] * 
	* 
	W1119 21:53:14.256752  871605 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:53:14.259932  871605 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (271.990478ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:53:14.323273  871648 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:53:14.324251  871648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.324303  871648 out.go:374] Setting ErrFile to fd 2...
	I1119 21:53:14.324326  871648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:53:14.324688  871648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:53:14.325039  871648 mustload.go:66] Loading cluster: addons-441523
	I1119 21:53:14.325485  871648 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.325525  871648 addons.go:607] checking whether the cluster is paused
	I1119 21:53:14.326583  871648 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:53:14.326632  871648 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:53:14.327293  871648 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:53:14.345273  871648 ssh_runner.go:195] Run: systemctl --version
	I1119 21:53:14.345330  871648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:53:14.363172  871648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:53:14.466615  871648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:53:14.466775  871648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:53:14.500769  871648 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:53:14.500802  871648 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:53:14.500808  871648 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:53:14.500813  871648 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:53:14.500816  871648 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:53:14.500820  871648 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:53:14.500823  871648 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:53:14.500827  871648 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:53:14.500830  871648 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:53:14.500846  871648 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:53:14.500853  871648 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:53:14.500868  871648 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:53:14.500876  871648 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:53:14.500880  871648 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:53:14.500883  871648 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:53:14.500888  871648 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:53:14.500893  871648 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:53:14.500898  871648 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:53:14.500902  871648 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:53:14.500905  871648 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:53:14.500909  871648 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:53:14.500912  871648 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:53:14.500915  871648 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:53:14.500918  871648 cri.go:89] found id: ""
	I1119 21:53:14.500983  871648 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:53:14.516914  871648 out.go:203] 
	W1119 21:53:14.519873  871648 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:53:14.519902  871648 out.go:285] * 
	* 
	W1119 21:53:14.526292  871648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:53:14.529313  871648 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.10s)
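The runc error message points at a missing runc state directory on the node. A quick way to confirm that, as a sketch and assuming `ls` and `crictl` are available in the kicbase image (the captured logs already show `crictl ps` succeeding there), would be:

	out/minikube-linux-arm64 -p addons-441523 ssh -- ls -ld /run/runc
	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo crictl info

If /run/runc does not exist while `crictl info` reports a healthy crio runtime, that would be consistent with the paused-state check querying runc's default state root rather than whatever state directory crio uses on this image; which directory crio actually uses here is an assumption to verify, not something shown in the log.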

                                                
                                    
TestAddons/parallel/Headlamp (3.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-441523 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-441523 --alsologtostderr -v=1: exit status 11 (320.761531ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:25.732511  869902 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:25.733282  869902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:25.733302  869902 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:25.733310  869902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:25.733626  869902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:25.733957  869902 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:25.734397  869902 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:25.734416  869902 addons.go:607] checking whether the cluster is paused
	I1119 21:52:25.734565  869902 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:25.734584  869902 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:25.735220  869902 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:25.753523  869902 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:25.753599  869902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:25.781072  869902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:25.882214  869902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:25.882304  869902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:25.914971  869902 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:25.914993  869902 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:25.915003  869902 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:25.915008  869902 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:25.915011  869902 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:25.915015  869902 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:25.915018  869902 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:25.915022  869902 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:25.915025  869902 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:25.915031  869902 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:25.915034  869902 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:25.915037  869902 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:25.915046  869902 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:25.915049  869902 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:25.915052  869902 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:25.915057  869902 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:25.915063  869902 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:25.915067  869902 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:25.915070  869902 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:25.915073  869902 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:25.915078  869902 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:25.915081  869902 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:25.915084  869902 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:25.915087  869902 cri.go:89] found id: ""
	I1119 21:52:25.915137  869902 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:25.939553  869902 out.go:203] 
	W1119 21:52:25.942451  869902 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:25.942474  869902 out.go:285] * 
	* 
	W1119 21:52:25.948846  869902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:25.952014  869902 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-441523 --alsologtostderr -v=1": exit status 11
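The enable path fails at the same paused-state check as the disable path above. To see what that check sees, the crictl listing from the log can be replayed without `--quiet` so container names are visible; this is a sketch, assuming the profile is still running, and dropping `--quiet` is the only change relative to the logged command:

	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

The container IDs found above show kube-system containers are present; the exit status 11 comes from the follow-up `runc list` call, not from crictl.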
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-441523
helpers_test.go:243: (dbg) docker inspect addons-441523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b",
	        "Created": "2025-11-19T21:49:25.051412864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 863336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:49:25.114693035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/hostname",
	        "HostsPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/hosts",
	        "LogPath": "/var/lib/docker/containers/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b/414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b-json.log",
	        "Name": "/addons-441523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-441523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-441523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "414e65357ea277f765e39aac117b1862895c65cacf934a5c66c9fc694287f84b",
	                "LowerDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46f6b1791e1040436d444e896c0fcc76da272283b44528b7d8b3d683c0fac803/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-441523",
	                "Source": "/var/lib/docker/volumes/addons-441523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-441523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-441523",
	                "name.minikube.sigs.k8s.io": "addons-441523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ac53c5a3a02539119c122f79df4f7398442c00691fe5476b146461c5c6d24b2",
	            "SandboxKey": "/var/run/docker/netns/8ac53c5a3a02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-441523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:25:51:bb:81:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9beb4116a0e41a7239b474a3998dd7108ccdf0e70a60f65f784a1ef2cc908173",
	                    "EndpointID": "ec7cdf6358e781da5f35f453674382391e27f3fa779fce6ed06a1065361e36c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-441523",
	                        "414e65357ea2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
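The inspect output above also accounts for the SSH endpoint used earlier in the log (sshutil: IP 127.0.0.1, Port 33561): 22/tcp on the container is published on 127.0.0.1:33561. The mapping can be read back either with the same Go template the tooling uses in its logged cli_runner call, or with `docker port`; a sketch for this run, assuming the container has not been recreated:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-441523
	docker port addons-441523 22

Both should report the 33561 mapping shown in the NetworkSettings section above.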
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-441523 -n addons-441523
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-441523 logs -n 25: (1.588458255s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-914845 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-914845   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-914845                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-914845   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ -o=json --download-only -p download-only-667855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-667855   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-667855                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-667855   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-914845                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-914845   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-667855                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-667855   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ --download-only -p download-docker-739940 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-739940 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ delete  │ -p download-docker-739940                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-739940 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ --download-only -p binary-mirror-835231 --alsologtostderr --binary-mirror http://127.0.0.1:43589 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-835231   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ delete  │ -p binary-mirror-835231                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-835231   │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ addons  │ disable dashboard -p addons-441523                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ addons  │ enable dashboard -p addons-441523                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	│ start   │ -p addons-441523 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:51 UTC │
	│ addons  │ addons-441523 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:51 UTC │                     │
	│ addons  │ addons-441523 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ ip      │ addons-441523 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-441523 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ ssh     │ addons-441523 ssh cat /opt/local-path-provisioner/pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-441523 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-441523 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	│ addons  │ addons-441523 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-441523          │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:48:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:48:57.861702  862935 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:48:57.861845  862935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:57.861858  862935 out.go:374] Setting ErrFile to fd 2...
	I1119 21:48:57.861864  862935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:57.862137  862935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:48:57.862626  862935 out.go:368] Setting JSON to false
	I1119 21:48:57.863476  862935 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12667,"bootTime":1763576271,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 21:48:57.863543  862935 start.go:143] virtualization:  
	I1119 21:48:57.866957  862935 out.go:179] * [addons-441523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:48:57.870695  862935 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:48:57.870808  862935 notify.go:221] Checking for updates...
	I1119 21:48:57.876480  862935 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:48:57.879327  862935 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:48:57.882325  862935 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 21:48:57.885192  862935 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 21:48:57.888043  862935 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:48:57.891208  862935 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:48:57.920387  862935 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:48:57.920501  862935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:57.984176  862935 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:57.974953313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:57.984286  862935 docker.go:319] overlay module found
	I1119 21:48:57.987302  862935 out.go:179] * Using the docker driver based on user configuration
	I1119 21:48:57.990130  862935 start.go:309] selected driver: docker
	I1119 21:48:57.990145  862935 start.go:930] validating driver "docker" against <nil>
	I1119 21:48:57.990159  862935 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:48:57.990909  862935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:58.047623  862935 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:58.037583559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:58.047830  862935 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:48:58.048082  862935 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:48:58.051058  862935 out.go:179] * Using Docker driver with root privileges
	I1119 21:48:58.054005  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:48:58.054088  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:48:58.054098  862935 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:48:58.054193  862935 start.go:353] cluster config:
	{Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1119 21:48:58.059308  862935 out.go:179] * Starting "addons-441523" primary control-plane node in "addons-441523" cluster
	I1119 21:48:58.062186  862935 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:48:58.065101  862935 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:48:58.068060  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:48:58.068111  862935 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 21:48:58.068122  862935 cache.go:65] Caching tarball of preloaded images
	I1119 21:48:58.068148  862935 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:48:58.068218  862935 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 21:48:58.068229  862935 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:48:58.068586  862935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json ...
	I1119 21:48:58.068621  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json: {Name:mka57ff5f1b920d0aacbdf5cf225326ead9b2215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:48:58.084183  862935 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:48:58.084308  862935 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:48:58.084335  862935 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:48:58.084340  862935 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:48:58.084348  862935 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:48:58.084362  862935 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from local cache
	I1119 21:49:16.234792  862935 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from cached tarball
	I1119 21:49:16.234834  862935 cache.go:243] Successfully downloaded all kic artifacts
	I1119 21:49:16.234888  862935 start.go:360] acquireMachinesLock for addons-441523: {Name:mk3d2e259db7e5fa8383aeccf2ef969557fd328e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:49:16.235619  862935 start.go:364] duration metric: took 702.615µs to acquireMachinesLock for "addons-441523"
	I1119 21:49:16.235662  862935 start.go:93] Provisioning new machine with config: &{Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:49:16.235748  862935 start.go:125] createHost starting for "" (driver="docker")
	I1119 21:49:16.239348  862935 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 21:49:16.239592  862935 start.go:159] libmachine.API.Create for "addons-441523" (driver="docker")
	I1119 21:49:16.239638  862935 client.go:173] LocalClient.Create starting
	I1119 21:49:16.239748  862935 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 21:49:17.650457  862935 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 21:49:18.099507  862935 cli_runner.go:164] Run: docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 21:49:18.114284  862935 cli_runner.go:211] docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 21:49:18.114367  862935 network_create.go:284] running [docker network inspect addons-441523] to gather additional debugging logs...
	I1119 21:49:18.114390  862935 cli_runner.go:164] Run: docker network inspect addons-441523
	W1119 21:49:18.128920  862935 cli_runner.go:211] docker network inspect addons-441523 returned with exit code 1
	I1119 21:49:18.128956  862935 network_create.go:287] error running [docker network inspect addons-441523]: docker network inspect addons-441523: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-441523 not found
	I1119 21:49:18.128970  862935 network_create.go:289] output of [docker network inspect addons-441523]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-441523 not found
	
	** /stderr **
	I1119 21:49:18.129100  862935 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:49:18.145581  862935 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001979a80}
	I1119 21:49:18.145627  862935 network_create.go:124] attempt to create docker network addons-441523 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 21:49:18.145681  862935 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-441523 addons-441523
	I1119 21:49:18.200697  862935 network_create.go:108] docker network addons-441523 192.168.49.0/24 created
	I1119 21:49:18.200731  862935 kic.go:121] calculated static IP "192.168.49.2" for the "addons-441523" container
	I1119 21:49:18.200818  862935 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 21:49:18.216370  862935 cli_runner.go:164] Run: docker volume create addons-441523 --label name.minikube.sigs.k8s.io=addons-441523 --label created_by.minikube.sigs.k8s.io=true
	I1119 21:49:18.234203  862935 oci.go:103] Successfully created a docker volume addons-441523
	I1119 21:49:18.234296  862935 cli_runner.go:164] Run: docker run --rm --name addons-441523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --entrypoint /usr/bin/test -v addons-441523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 21:49:20.541170  862935 cli_runner.go:217] Completed: docker run --rm --name addons-441523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --entrypoint /usr/bin/test -v addons-441523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib: (2.306834446s)
	I1119 21:49:20.541206  862935 oci.go:107] Successfully prepared a docker volume addons-441523
	I1119 21:49:20.541262  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:49:20.541272  862935 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 21:49:20.541335  862935 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-441523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 21:49:24.982056  862935 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-441523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.440681919s)
	I1119 21:49:24.982090  862935 kic.go:203] duration metric: took 4.440814319s to extract preloaded images to volume ...
	W1119 21:49:24.982228  862935 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 21:49:24.982338  862935 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 21:49:25.036233  862935 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-441523 --name addons-441523 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-441523 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-441523 --network addons-441523 --ip 192.168.49.2 --volume addons-441523:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 21:49:25.323805  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Running}}
	I1119 21:49:25.343086  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.366328  862935 cli_runner.go:164] Run: docker exec addons-441523 stat /var/lib/dpkg/alternatives/iptables
	I1119 21:49:25.423160  862935 oci.go:144] the created container "addons-441523" has a running status.
	I1119 21:49:25.423186  862935 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa...
	I1119 21:49:25.789346  862935 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 21:49:25.817209  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.837088  862935 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 21:49:25.837115  862935 kic_runner.go:114] Args: [docker exec --privileged addons-441523 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 21:49:25.877210  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:25.895233  862935 machine.go:94] provisionDockerMachine start ...
	I1119 21:49:25.895346  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:25.912624  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:25.912971  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:25.912988  862935 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:49:25.913637  862935 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 21:49:29.058400  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-441523
	
	I1119 21:49:29.058422  862935 ubuntu.go:182] provisioning hostname "addons-441523"
	I1119 21:49:29.058487  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.076452  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.076770  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.076785  862935 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-441523 && echo "addons-441523" | sudo tee /etc/hostname
	I1119 21:49:29.227827  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-441523
	
	I1119 21:49:29.227916  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.245648  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.245955  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.245977  862935 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-441523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-441523/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-441523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:49:29.387170  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:49:29.387195  862935 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 21:49:29.387225  862935 ubuntu.go:190] setting up certificates
	I1119 21:49:29.387246  862935 provision.go:84] configureAuth start
	I1119 21:49:29.387325  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:29.405092  862935 provision.go:143] copyHostCerts
	I1119 21:49:29.405181  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 21:49:29.405313  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 21:49:29.405393  862935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 21:49:29.405452  862935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.addons-441523 san=[127.0.0.1 192.168.49.2 addons-441523 localhost minikube]
	I1119 21:49:29.736048  862935 provision.go:177] copyRemoteCerts
	I1119 21:49:29.736124  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:49:29.736166  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.752732  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:29.854512  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 21:49:29.872289  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:49:29.889782  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:49:29.907827  862935 provision.go:87] duration metric: took 520.551136ms to configureAuth
	I1119 21:49:29.907862  862935 ubuntu.go:206] setting minikube options for container-runtime
	I1119 21:49:29.908055  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:29.908165  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:29.925891  862935 main.go:143] libmachine: Using SSH client type: native
	I1119 21:49:29.926210  862935 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33561 <nil> <nil>}
	I1119 21:49:29.926230  862935 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:49:30.263038  862935 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:49:30.263064  862935 machine.go:97] duration metric: took 4.367807217s to provisionDockerMachine
	I1119 21:49:30.263089  862935 client.go:176] duration metric: took 14.02342532s to LocalClient.Create
	I1119 21:49:30.263102  862935 start.go:167] duration metric: took 14.023512616s to libmachine.API.Create "addons-441523"
	I1119 21:49:30.263112  862935 start.go:293] postStartSetup for "addons-441523" (driver="docker")
	I1119 21:49:30.263122  862935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:49:30.263193  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:49:30.263242  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.283088  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.383232  862935 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:49:30.386507  862935 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 21:49:30.386534  862935 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 21:49:30.386545  862935 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 21:49:30.386614  862935 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 21:49:30.386636  862935 start.go:296] duration metric: took 123.517497ms for postStartSetup
	I1119 21:49:30.386975  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:30.403526  862935 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/config.json ...
	I1119 21:49:30.403833  862935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:49:30.403899  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.421618  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.520066  862935 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 21:49:30.524821  862935 start.go:128] duration metric: took 14.289057311s to createHost
	I1119 21:49:30.524848  862935 start.go:83] releasing machines lock for "addons-441523", held for 14.289208936s
	I1119 21:49:30.524921  862935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-441523
	I1119 21:49:30.542065  862935 ssh_runner.go:195] Run: cat /version.json
	I1119 21:49:30.542117  862935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:49:30.542199  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.542124  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:30.565120  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.575014  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:30.749235  862935 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:30.755769  862935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:49:30.792594  862935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:49:30.796827  862935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:49:30.796897  862935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:49:30.824711  862935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 21:49:30.824733  862935 start.go:496] detecting cgroup driver to use...
	I1119 21:49:30.824765  862935 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 21:49:30.824815  862935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:49:30.841771  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:49:30.854490  862935 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:49:30.854557  862935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:49:30.873297  862935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:49:30.894328  862935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:49:31.017596  862935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:49:31.153846  862935 docker.go:234] disabling docker service ...
	I1119 21:49:31.153947  862935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:49:31.176473  862935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:49:31.191159  862935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:49:31.311081  862935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:49:31.439695  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:49:31.453385  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:49:31.468513  862935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:49:31.468633  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.477845  862935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:49:31.477957  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.487412  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.496806  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.505940  862935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:49:31.514944  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.524436  862935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.538491  862935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:49:31.547754  862935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:49:31.555675  862935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:49:31.563285  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:31.679281  862935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:49:31.854519  862935 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:49:31.854597  862935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:49:31.858327  862935 start.go:564] Will wait 60s for crictl version
	I1119 21:49:31.858384  862935 ssh_runner.go:195] Run: which crictl
	I1119 21:49:31.861799  862935 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 21:49:31.885654  862935 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 21:49:31.885828  862935 ssh_runner.go:195] Run: crio --version
	I1119 21:49:31.917697  862935 ssh_runner.go:195] Run: crio --version
	I1119 21:49:31.948924  862935 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 21:49:31.951512  862935 cli_runner.go:164] Run: docker network inspect addons-441523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:49:31.970112  862935 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 21:49:31.973988  862935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:49:31.983664  862935 kubeadm.go:884] updating cluster {Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:49:31.983791  862935 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:49:31.983852  862935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:49:32.021704  862935 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:49:32.021734  862935 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:49:32.021792  862935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:49:32.047029  862935 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:49:32.047055  862935 cache_images.go:86] Images are preloaded, skipping loading
	I1119 21:49:32.047063  862935 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 21:49:32.047169  862935 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-441523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 21:49:32.047260  862935 ssh_runner.go:195] Run: crio config
	I1119 21:49:32.099996  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:49:32.100021  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:49:32.100045  862935 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:49:32.100070  862935 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-441523 NodeName:addons-441523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:49:32.100198  862935 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-441523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 21:49:32.100279  862935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:49:32.108440  862935 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:49:32.108508  862935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:49:32.116718  862935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 21:49:32.129632  862935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:49:32.142740  862935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1119 21:49:32.155666  862935 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 21:49:32.159218  862935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:49:32.168719  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:32.282858  862935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:49:32.297974  862935 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523 for IP: 192.168.49.2
	I1119 21:49:32.297997  862935 certs.go:195] generating shared ca certs ...
	I1119 21:49:32.298012  862935 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.298188  862935 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 21:49:32.816911  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt ...
	I1119 21:49:32.816945  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt: {Name:mkf1d98d4e371ceb601e565d414bc633ade7a72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.817842  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key ...
	I1119 21:49:32.817871  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key: {Name:mk592e686c52cc1b9a8e48e3cbd0b8215de1fe61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.818042  862935 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 21:49:32.971558  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt ...
	I1119 21:49:32.971587  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt: {Name:mkd8d269b824ee7e8a1dfa7afa9dcf5651378848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.972408  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key ...
	I1119 21:49:32.972424  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key: {Name:mkf1e3ae1e2bec2690d49e7a1ab5c1df3f001005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:32.972509  862935 certs.go:257] generating profile certs ...
	I1119 21:49:32.972582  862935 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key
	I1119 21:49:32.972602  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt with IP's: []
	I1119 21:49:33.150033  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt ...
	I1119 21:49:33.150066  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: {Name:mkf27ca42c6172695431b1f1ec36368c0c0e561e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:33.151296  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key ...
	I1119 21:49:33.151313  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.key: {Name:mke8cd1292dace8bab04bed1f3cd3ab58f4af8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:33.151415  862935 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8
	I1119 21:49:33.151439  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 21:49:34.361139  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 ...
	I1119 21:49:34.361172  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8: {Name:mk9140eb20c383f802d1d6b9c0b92851e6b30be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:34.361356  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8 ...
	I1119 21:49:34.361375  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8: {Name:mk55d8e694670224883581104513f3e8439eeabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:34.361457  862935 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt.5d1e59a8 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt
	I1119 21:49:34.361535  862935 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key.5d1e59a8 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key
	I1119 21:49:34.361588  862935 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key
	I1119 21:49:34.361608  862935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt with IP's: []
	I1119 21:49:35.020592  862935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt ...
	I1119 21:49:35.020627  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt: {Name:mk39ef9d435d3dd27e6848746daffd14262b99a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:35.020811  862935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key ...
	I1119 21:49:35.020825  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key: {Name:mk1deaf58fe1aea64276e2e9480d53503e7e3197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:35.021990  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 21:49:35.022038  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:49:35.022063  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:49:35.022089  862935 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 21:49:35.022673  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:49:35.042487  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:49:35.061711  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:49:35.079640  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 21:49:35.098019  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 21:49:35.117012  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:49:35.134522  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:49:35.152821  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:49:35.170994  862935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:49:35.188647  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:49:35.201649  862935 ssh_runner.go:195] Run: openssl version
	I1119 21:49:35.207989  862935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:49:35.216654  862935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.220482  862935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.220550  862935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:49:35.261449  862935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:49:35.269943  862935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:49:35.274613  862935 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 21:49:35.274663  862935 kubeadm.go:401] StartCluster: {Name:addons-441523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-441523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:49:35.274752  862935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:35.274811  862935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:35.312135  862935 cri.go:89] found id: ""
	I1119 21:49:35.312206  862935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:49:35.321903  862935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:49:35.330512  862935 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 21:49:35.330580  862935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:49:35.342414  862935 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 21:49:35.342438  862935 kubeadm.go:158] found existing configuration files:
	
	I1119 21:49:35.342488  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 21:49:35.349768  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 21:49:35.349837  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 21:49:35.357059  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 21:49:35.364749  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 21:49:35.364814  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:49:35.372047  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 21:49:35.379632  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 21:49:35.379724  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:49:35.387038  862935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 21:49:35.394554  862935 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 21:49:35.394638  862935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 21:49:35.402111  862935 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 21:49:35.444194  862935 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 21:49:35.444517  862935 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 21:49:35.467080  862935 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 21:49:35.467163  862935 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 21:49:35.467206  862935 kubeadm.go:319] OS: Linux
	I1119 21:49:35.467258  862935 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 21:49:35.467312  862935 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 21:49:35.467366  862935 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 21:49:35.467419  862935 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 21:49:35.467473  862935 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 21:49:35.467527  862935 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 21:49:35.467579  862935 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 21:49:35.467636  862935 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 21:49:35.467687  862935 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 21:49:35.539746  862935 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 21:49:35.540014  862935 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 21:49:35.540162  862935 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 21:49:35.548873  862935 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 21:49:35.553165  862935 out.go:252]   - Generating certificates and keys ...
	I1119 21:49:35.553340  862935 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 21:49:35.553461  862935 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 21:49:35.760476  862935 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 21:49:36.926098  862935 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 21:49:38.091503  862935 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 21:49:38.245240  862935 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 21:49:38.550400  862935 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 21:49:38.550570  862935 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-441523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:49:39.625769  862935 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 21:49:39.625922  862935 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-441523 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:49:39.878595  862935 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 21:49:40.148816  862935 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 21:49:40.359048  862935 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 21:49:40.359136  862935 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 21:49:40.692131  862935 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 21:49:41.180317  862935 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 21:49:41.457315  862935 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 21:49:41.996427  862935 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 21:49:42.427859  862935 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 21:49:42.428676  862935 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 21:49:42.431526  862935 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 21:49:42.435137  862935 out.go:252]   - Booting up control plane ...
	I1119 21:49:42.435277  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 21:49:42.435374  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 21:49:42.435465  862935 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 21:49:42.452542  862935 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 21:49:42.452889  862935 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 21:49:42.461379  862935 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 21:49:42.461797  862935 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 21:49:42.461851  862935 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 21:49:42.607061  862935 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 21:49:42.607186  862935 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 21:49:43.604492  862935 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001039744s
	I1119 21:49:43.608315  862935 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 21:49:43.608423  862935 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 21:49:43.608816  862935 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 21:49:43.608933  862935 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 21:49:46.851283  862935 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.242632147s
	I1119 21:49:47.914615  862935 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.306310549s
	I1119 21:49:49.610641  862935 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002267046s
	I1119 21:49:49.630485  862935 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 21:49:49.642754  862935 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 21:49:49.657039  862935 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 21:49:49.657298  862935 kubeadm.go:319] [mark-control-plane] Marking the node addons-441523 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 21:49:49.668671  862935 kubeadm.go:319] [bootstrap-token] Using token: gdp8rj.h80e66euj21i98yv
	I1119 21:49:49.671801  862935 out.go:252]   - Configuring RBAC rules ...
	I1119 21:49:49.671947  862935 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 21:49:49.678009  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 21:49:49.690358  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 21:49:49.694696  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 21:49:49.699052  862935 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 21:49:49.703214  862935 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 21:49:50.018305  862935 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 21:49:50.449645  862935 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 21:49:51.017368  862935 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 21:49:51.018666  862935 kubeadm.go:319] 
	I1119 21:49:51.018746  862935 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 21:49:51.018752  862935 kubeadm.go:319] 
	I1119 21:49:51.018833  862935 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 21:49:51.018838  862935 kubeadm.go:319] 
	I1119 21:49:51.018887  862935 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 21:49:51.018951  862935 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 21:49:51.019003  862935 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 21:49:51.019007  862935 kubeadm.go:319] 
	I1119 21:49:51.019064  862935 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 21:49:51.019069  862935 kubeadm.go:319] 
	I1119 21:49:51.019118  862935 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 21:49:51.019123  862935 kubeadm.go:319] 
	I1119 21:49:51.019177  862935 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 21:49:51.019255  862935 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 21:49:51.019326  862935 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 21:49:51.019331  862935 kubeadm.go:319] 
	I1119 21:49:51.019419  862935 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 21:49:51.019499  862935 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 21:49:51.019503  862935 kubeadm.go:319] 
	I1119 21:49:51.019590  862935 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gdp8rj.h80e66euj21i98yv \
	I1119 21:49:51.019697  862935 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 21:49:51.019719  862935 kubeadm.go:319] 	--control-plane 
	I1119 21:49:51.019722  862935 kubeadm.go:319] 
	I1119 21:49:51.019811  862935 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 21:49:51.019816  862935 kubeadm.go:319] 
	I1119 21:49:51.019901  862935 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gdp8rj.h80e66euj21i98yv \
	I1119 21:49:51.020061  862935 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 21:49:51.022652  862935 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 21:49:51.022913  862935 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 21:49:51.023056  862935 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 21:49:51.023083  862935 cni.go:84] Creating CNI manager for ""
	I1119 21:49:51.023092  862935 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:49:51.026388  862935 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 21:49:51.029307  862935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 21:49:51.033742  862935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 21:49:51.033762  862935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 21:49:51.049599  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 21:49:51.337813  862935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 21:49:51.337893  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:51.337951  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-441523 minikube.k8s.io/updated_at=2025_11_19T21_49_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=addons-441523 minikube.k8s.io/primary=true
	I1119 21:49:51.354939  862935 ops.go:34] apiserver oom_adj: -16
	I1119 21:49:51.531830  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:52.031970  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:52.532847  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:53.032622  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:53.532683  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:54.031945  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:54.532437  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:55.032000  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:55.532006  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:56.032058  862935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:49:56.152067  862935 kubeadm.go:1114] duration metric: took 4.814246952s to wait for elevateKubeSystemPrivileges
	I1119 21:49:56.152098  862935 kubeadm.go:403] duration metric: took 20.877438665s to StartCluster
	I1119 21:49:56.152117  862935 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:56.152261  862935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:49:56.152769  862935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:49:56.152980  862935 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:49:56.153122  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 21:49:56.153396  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:56.153436  862935 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 21:49:56.153537  862935 addons.go:70] Setting yakd=true in profile "addons-441523"
	I1119 21:49:56.153556  862935 addons.go:239] Setting addon yakd=true in "addons-441523"
	I1119 21:49:56.153586  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.154105  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.154614  862935 addons.go:70] Setting metrics-server=true in profile "addons-441523"
	I1119 21:49:56.154639  862935 addons.go:239] Setting addon metrics-server=true in "addons-441523"
	I1119 21:49:56.154663  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.155112  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.155241  862935 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-441523"
	I1119 21:49:56.155261  862935 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-441523"
	I1119 21:49:56.155281  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.155711  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158023  862935 addons.go:70] Setting registry=true in profile "addons-441523"
	I1119 21:49:56.158055  862935 addons.go:239] Setting addon registry=true in "addons-441523"
	I1119 21:49:56.158187  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158431  862935 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-441523"
	I1119 21:49:56.158513  862935 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-441523"
	I1119 21:49:56.158643  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158964  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.159313  862935 addons.go:70] Setting default-storageclass=true in profile "addons-441523"
	I1119 21:49:56.159348  862935 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-441523"
	I1119 21:49:56.159617  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158343  862935 addons.go:70] Setting registry-creds=true in profile "addons-441523"
	I1119 21:49:56.160607  862935 addons.go:239] Setting addon registry-creds=true in "addons-441523"
	I1119 21:49:56.160662  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.161248  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.170146  862935 addons.go:70] Setting gcp-auth=true in profile "addons-441523"
	I1119 21:49:56.170180  862935 mustload.go:66] Loading cluster: addons-441523
	I1119 21:49:56.170386  862935 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:56.170639  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158356  862935 addons.go:70] Setting storage-provisioner=true in profile "addons-441523"
	I1119 21:49:56.175556  862935 addons.go:239] Setting addon storage-provisioner=true in "addons-441523"
	I1119 21:49:56.175597  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.176076  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.182477  862935 addons.go:70] Setting ingress=true in profile "addons-441523"
	I1119 21:49:56.182508  862935 addons.go:239] Setting addon ingress=true in "addons-441523"
	I1119 21:49:56.182559  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.183158  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158363  862935 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-441523"
	I1119 21:49:56.186390  862935 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-441523"
	I1119 21:49:56.186750  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.198940  862935 addons.go:70] Setting ingress-dns=true in profile "addons-441523"
	I1119 21:49:56.198973  862935 addons.go:239] Setting addon ingress-dns=true in "addons-441523"
	I1119 21:49:56.199014  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158369  862935 addons.go:70] Setting volcano=true in profile "addons-441523"
	I1119 21:49:56.199493  862935 addons.go:239] Setting addon volcano=true in "addons-441523"
	I1119 21:49:56.199519  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.199930  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.200222  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158394  862935 addons.go:70] Setting volumesnapshots=true in profile "addons-441523"
	I1119 21:49:56.208504  862935 addons.go:239] Setting addon volumesnapshots=true in "addons-441523"
	I1119 21:49:56.208549  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.209021  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.209171  862935 addons.go:70] Setting inspektor-gadget=true in profile "addons-441523"
	I1119 21:49:56.209189  862935 addons.go:239] Setting addon inspektor-gadget=true in "addons-441523"
	I1119 21:49:56.209209  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.209602  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158413  862935 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-441523"
	I1119 21:49:56.211942  862935 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-441523"
	I1119 21:49:56.211989  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.212479  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.158420  862935 addons.go:70] Setting cloud-spanner=true in profile "addons-441523"
	I1119 21:49:56.299279  862935 addons.go:239] Setting addon cloud-spanner=true in "addons-441523"
	I1119 21:49:56.299360  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.299957  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.301987  862935 addons.go:239] Setting addon default-storageclass=true in "addons-441523"
	I1119 21:49:56.302132  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.158428  862935 out.go:179] * Verifying Kubernetes components...
	I1119 21:49:56.320471  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.325279  862935 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 21:49:56.334941  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 21:49:56.335010  862935 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 21:49:56.335119  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.335310  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.363012  862935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:49:56.363345  862935 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 21:49:56.367099  862935 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:49:56.367123  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 21:49:56.367219  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.377455  862935 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 21:49:56.378512  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.387111  862935 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 21:49:56.389881  862935 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 21:49:56.390000  862935 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:49:56.390011  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 21:49:56.390085  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.390175  862935 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 21:49:56.383187  862935 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 21:49:56.393067  862935 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:49:56.393086  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 21:49:56.393151  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	W1119 21:49:56.383338  862935 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 21:49:56.383480  862935 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:49:56.399915  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 21:49:56.400000  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.432137  862935 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 21:49:56.435033  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 21:49:56.435057  862935 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 21:49:56.435127  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.436042  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 21:49:56.436221  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 21:49:56.436451  862935 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 21:49:56.436471  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 21:49:56.436540  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.334946  862935 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 21:49:56.450146  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.453595  862935 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:49:56.477728  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 21:49:56.477805  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.495421  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 21:49:56.459289  862935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 21:49:56.464681  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 21:49:56.498376  862935 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 21:49:56.498461  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.475365  862935 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 21:49:56.517629  862935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 21:49:56.517712  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.541597  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 21:49:56.541862  862935 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 21:49:56.551022  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.551522  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:49:56.553434  862935 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 21:49:56.553463  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 21:49:56.553531  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.559839  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 21:49:56.561118  862935 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-441523"
	I1119 21:49:56.561204  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:49:56.561707  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:49:56.578308  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:49:56.581331  862935 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:49:56.581355  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 21:49:56.581421  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.600035  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.603381  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.607061  862935 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 21:49:56.607112  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 21:49:56.609543  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.611501  862935 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:49:56.611518  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 21:49:56.611578  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.611992  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.615969  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 21:49:56.623047  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 21:49:56.627524  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 21:49:56.633511  862935 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 21:49:56.636413  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 21:49:56.636438  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 21:49:56.636504  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.653783  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.682476  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.703042  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.718614  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.727905  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.745047  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.752749  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.758183  862935 out.go:179]   - Using image docker.io/busybox:stable
	I1119 21:49:56.761242  862935 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 21:49:56.764728  862935 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:49:56.764754  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 21:49:56.764816  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:49:56.780646  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	W1119 21:49:56.783051  862935 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:49:56.783081  862935 retry.go:31] will retry after 198.173201ms: ssh: handshake failed: EOF
	I1119 21:49:56.799097  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:49:56.944954  862935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:49:57.312019  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:49:57.339206  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:49:57.390576  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 21:49:57.390601  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 21:49:57.431625  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 21:49:57.435538  862935 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 21:49:57.435567  862935 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 21:49:57.440544  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:49:57.460793  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:49:57.504714  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 21:49:57.504742  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 21:49:57.515045  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 21:49:57.515073  862935 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 21:49:57.570845  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:49:57.573070  862935 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:49:57.573102  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 21:49:57.608019  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:49:57.613338  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 21:49:57.613366  862935 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 21:49:57.638648  862935 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:49:57.638676  862935 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 21:49:57.676324  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 21:49:57.690004  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:49:57.757214  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 21:49:57.757250  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 21:49:57.769885  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:49:57.777661  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:49:57.844156  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 21:49:57.844204  862935 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 21:49:57.860258  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 21:49:57.860287  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 21:49:57.881428  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:49:58.020698  862935 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 21:49:58.020769  862935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 21:49:58.022022  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 21:49:58.022083  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 21:49:58.067019  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 21:49:58.067086  862935 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 21:49:58.204547  862935 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:49:58.204617  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 21:49:58.209478  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 21:49:58.209544  862935 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 21:49:58.214587  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 21:49:58.214657  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 21:49:58.382706  862935 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:49:58.382774  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 21:49:58.385524  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:49:58.389882  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 21:49:58.389950  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 21:49:58.393062  862935 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.897302196s)
	I1119 21:49:58.393179  862935 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1119 21:49:58.393138  862935 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.44812185s)
	I1119 21:49:58.394022  862935 node_ready.go:35] waiting up to 6m0s for node "addons-441523" to be "Ready" ...
	I1119 21:49:58.630237  862935 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 21:49:58.630303  862935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 21:49:58.669002  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:49:58.805435  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 21:49:58.805462  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 21:49:58.901675  862935 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-441523" context rescaled to 1 replicas
	I1119 21:49:59.076224  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 21:49:59.076294  862935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 21:49:59.307796  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 21:49:59.307859  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 21:49:59.410590  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 21:49:59.410661  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 21:49:59.572766  862935 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:49:59.572833  862935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 21:49:59.733754  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1119 21:50:00.411106  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:00.956029  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.643970145s)
	I1119 21:50:01.563873  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.123284226s)
	I1119 21:50:01.563945  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.103091174s)
	I1119 21:50:01.564008  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.132179348s)
	I1119 21:50:01.564285  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.225049408s)
	I1119 21:50:01.609577  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.038662224s)
	I1119 21:50:01.609789  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.001737033s)
	I1119 21:50:01.609867  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.93351055s)
	I1119 21:50:02.367900  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.590211141s)
	I1119 21:50:02.367932  862935 addons.go:480] Verifying addon registry=true in "addons-441523"
	I1119 21:50:02.367860  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.59793769s)
	I1119 21:50:02.368204  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.486731786s)
	I1119 21:50:02.368218  862935 addons.go:480] Verifying addon metrics-server=true in "addons-441523"
	I1119 21:50:02.368257  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.982673996s)
	I1119 21:50:02.368327  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.678298521s)
	I1119 21:50:02.368353  862935 addons.go:480] Verifying addon ingress=true in "addons-441523"
	I1119 21:50:02.368613  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.699525411s)
	W1119 21:50:02.368646  862935 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:50:02.368664  862935 retry.go:31] will retry after 180.609841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:50:02.371187  862935 out.go:179] * Verifying registry addon...
	I1119 21:50:02.373241  862935 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-441523 service yakd-dashboard -n yakd-dashboard
	
	I1119 21:50:02.373283  862935 out.go:179] * Verifying ingress addon...
	I1119 21:50:02.376099  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 21:50:02.378074  862935 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 21:50:02.387273  862935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:50:02.387301  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:02.387498  862935 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:50:02.387517  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:02.549976  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:50:02.821075  862935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.087201536s)
	I1119 21:50:02.821112  862935 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-441523"
	I1119 21:50:02.824208  862935 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 21:50:02.828071  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 21:50:02.840866  862935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:50:02.840904  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 21:50:02.902116  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:02.942677  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:02.943398  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.331397  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:03.380466  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:03.385005  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.832139  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:03.879025  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:03.881118  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:03.987885  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 21:50:03.987972  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:50:04.009333  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:50:04.115773  862935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 21:50:04.128834  862935 addons.go:239] Setting addon gcp-auth=true in "addons-441523"
	I1119 21:50:04.128882  862935 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:50:04.129353  862935 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:50:04.147129  862935 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 21:50:04.147186  862935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:50:04.164491  862935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:50:04.266002  862935 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:50:04.268886  862935 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 21:50:04.271629  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 21:50:04.271658  862935 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 21:50:04.284991  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 21:50:04.285013  862935 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 21:50:04.299015  862935 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:50:04.299039  862935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 21:50:04.311419  862935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:50:04.331683  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:04.379592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:04.382824  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:04.825562  862935 addons.go:480] Verifying addon gcp-auth=true in "addons-441523"
	I1119 21:50:04.828679  862935 out.go:179] * Verifying gcp-auth addon...
	I1119 21:50:04.832525  862935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 21:50:04.838824  862935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 21:50:04.838850  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:04.839019  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:04.879181  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:04.881746  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:05.331730  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:05.336314  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:05.379164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:05.381127  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:05.397100  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:05.831593  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:05.836039  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:05.879987  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:05.881355  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:06.332034  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:06.335452  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:06.379274  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:06.381403  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:06.831615  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:06.835467  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:06.880353  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:06.881833  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:07.331233  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:07.335789  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:07.380215  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:07.381955  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:07.831564  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:07.835472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:07.879497  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:07.882444  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:07.898637  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:08.331885  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:08.335194  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:08.379988  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:08.381031  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:08.832571  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:08.834974  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:08.879879  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:08.881602  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:09.332289  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:09.335861  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:09.380555  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:09.382052  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:09.831763  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:09.835340  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:09.879947  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:09.882270  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:09.911144  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:10.331172  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:10.336191  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:10.379087  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:10.381038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:10.831259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:10.835864  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:10.879502  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:10.881651  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:11.332206  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:11.335966  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:11.379501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:11.381511  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:11.832044  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:11.835672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:11.879426  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:11.881437  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:12.331556  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:12.336108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:12.379822  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:12.380786  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:12.397390  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:12.831632  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:12.835303  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:12.879293  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:12.881593  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:13.331524  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:13.335078  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:13.379690  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:13.381073  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:13.831623  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:13.836194  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:13.879713  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:13.881728  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:14.331150  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:14.335991  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:14.378753  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:14.381027  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:14.831321  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:14.836252  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:14.878811  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:14.880887  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:14.899318  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:15.331568  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:15.335969  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:15.379933  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:15.380637  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:15.831738  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:15.835395  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:15.878965  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:15.881162  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:16.331458  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:16.335843  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:16.379413  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:16.381616  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:16.831413  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:16.836168  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:16.880075  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:16.881347  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:16.899865  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:17.330940  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:17.336343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:17.379444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:17.381775  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:17.832238  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:17.835787  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:17.880993  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:17.881108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.332462  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:18.336073  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:18.379769  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.380942  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:18.831912  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:18.835694  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:18.880098  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:18.882741  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:19.330928  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:19.335472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:19.379348  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:19.381547  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:19.397407  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:19.831813  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:19.835937  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:19.879795  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:19.881218  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:20.331114  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:20.335656  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:20.379422  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:20.381587  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:20.831590  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:20.835089  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:20.878920  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:20.880998  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:21.331343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:21.335909  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:21.379779  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:21.380805  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:21.397686  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:21.832264  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:21.835824  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:21.879972  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:21.881536  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:22.331315  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:22.335887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:22.380072  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:22.381554  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:22.831259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:22.835923  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:22.879924  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:22.881201  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:23.330970  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:23.335460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:23.379391  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:23.381601  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:23.832181  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:23.835849  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:23.879436  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:23.881868  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:23.899063  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:24.331164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:24.336142  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:24.380751  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:24.381252  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:24.831343  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:24.836240  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:24.879145  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:24.881597  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:25.330830  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:25.335356  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:25.378854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:25.381222  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:25.831304  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:25.836300  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:25.880167  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:25.881446  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:26.331258  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:26.335932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:26.380732  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:26.381469  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1119 21:50:26.397386  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:26.831698  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:26.835509  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:26.879459  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:26.883298  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:27.331778  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:27.335534  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:27.380007  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:27.381828  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:27.831662  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:27.835219  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:27.880004  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:27.881457  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:28.331932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:28.338977  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:28.379762  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:28.382006  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:28.397626  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:28.831986  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:28.835640  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:28.879444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:28.882204  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:29.334525  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:29.336490  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:29.379457  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:29.381694  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:29.831592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:29.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:29.879109  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:29.881126  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:30.335286  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:30.336657  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:30.379460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:30.381868  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:30.397859  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:30.831444  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:30.836109  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:30.880336  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:30.881216  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:31.332113  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:31.335492  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:31.380288  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:31.381801  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:31.831612  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:31.836303  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:31.879010  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:31.881600  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:32.331309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:32.336065  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:32.379950  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:32.381378  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:32.831074  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:32.835792  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:32.879906  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:32.881194  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:32.898764  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:33.331854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:33.335588  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:33.379252  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:33.381461  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:33.831397  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:33.836234  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:33.878825  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:33.881000  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:34.331854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:34.335563  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:34.379253  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:34.381391  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:34.831131  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:34.835540  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:34.879170  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:34.881130  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:35.331536  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:35.335384  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:35.379054  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:35.381276  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:50:35.397066  862935 node_ready.go:57] node "addons-441523" has "Ready":"False" status (will retry)
	I1119 21:50:35.830763  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:35.835606  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:35.879104  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:35.881358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:36.331327  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:36.336051  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:36.380119  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:36.381726  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:36.831838  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:36.835991  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:36.879495  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:36.881692  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.358171  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:37.362152  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:37.417157  862935 node_ready.go:49] node "addons-441523" is "Ready"
	I1119 21:50:37.417189  862935 node_ready.go:38] duration metric: took 39.023121117s for node "addons-441523" to be "Ready" ...
	I1119 21:50:37.417203  862935 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:50:37.417277  862935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:50:37.441851  862935 api_server.go:72] duration metric: took 41.288833011s to wait for apiserver process to appear ...
	I1119 21:50:37.441881  862935 api_server.go:88] waiting for apiserver healthz status ...
	I1119 21:50:37.441902  862935 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 21:50:37.443787  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.492870  862935 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 21:50:37.502705  862935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:50:37.502733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:37.522337  862935 api_server.go:141] control plane version: v1.34.1
	I1119 21:50:37.522374  862935 api_server.go:131] duration metric: took 80.483579ms to wait for apiserver health ...
	I1119 21:50:37.522395  862935 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 21:50:37.574647  862935 system_pods.go:59] 18 kube-system pods found
	I1119 21:50:37.574684  862935 system_pods.go:61] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending
	I1119 21:50:37.574692  862935 system_pods.go:61] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.574707  862935 system_pods.go:61] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.574712  862935 system_pods.go:61] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.574717  862935 system_pods.go:61] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.574721  862935 system_pods.go:61] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.574725  862935 system_pods.go:61] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.574731  862935 system_pods.go:61] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending
	I1119 21:50:37.574736  862935 system_pods.go:61] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.574741  862935 system_pods.go:61] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.574745  862935 system_pods.go:61] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.574758  862935 system_pods.go:61] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.574763  862935 system_pods.go:61] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending
	I1119 21:50:37.574767  862935 system_pods.go:61] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.574805  862935 system_pods.go:61] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.574813  862935 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.574818  862935 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.574822  862935 system_pods.go:61] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending
	I1119 21:50:37.574827  862935 system_pods.go:74] duration metric: took 52.425217ms to wait for pod list to return data ...
	I1119 21:50:37.574839  862935 default_sa.go:34] waiting for default service account to be created ...
	I1119 21:50:37.653063  862935 default_sa.go:45] found service account: "default"
	I1119 21:50:37.653091  862935 default_sa.go:55] duration metric: took 78.245632ms for default service account to be created ...
	I1119 21:50:37.653102  862935 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 21:50:37.753125  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:37.753169  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:37.753176  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.753182  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.753186  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:37.753190  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.753195  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.753199  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.753203  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.753211  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending
	I1119 21:50:37.753214  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.753218  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.753241  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.753246  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.753250  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending
	I1119 21:50:37.753260  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.753264  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.753267  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.753271  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.753275  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending
	I1119 21:50:37.753292  862935 retry.go:31] will retry after 220.650746ms: missing components: kube-dns
	I1119 21:50:37.854711  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:37.855189  862935 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:50:37.855210  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:37.905716  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:37.909085  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:37.981117  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:37.981157  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:37.981165  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending
	I1119 21:50:37.981171  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending
	I1119 21:50:37.981176  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:37.981188  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:37.981197  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:37.981202  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:37.981213  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:37.981220  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:37.981225  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:37.981237  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:37.981241  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending
	I1119 21:50:37.981245  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:37.981251  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:37.981255  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending
	I1119 21:50:37.981265  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:37.981270  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending
	I1119 21:50:37.981274  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending
	I1119 21:50:37.981282  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:37.981304  862935 retry.go:31] will retry after 273.024117ms: missing components: kube-dns
	I1119 21:50:38.258492  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.258530  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.258539  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.258547  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.258559  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending
	I1119 21:50:38.258569  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.258575  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.258586  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.258591  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.258598  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.258607  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.258611  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.258618  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.258626  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending
	I1119 21:50:38.258640  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.258646  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.258652  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending
	I1119 21:50:38.258659  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.258671  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.258679  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:38.258698  862935 retry.go:31] will retry after 342.667594ms: missing components: kube-dns
	I1119 21:50:38.340307  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:38.340410  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:38.443751  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:38.444105  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:38.609068  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.609103  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.609112  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.609119  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.609135  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:38.609147  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.609160  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.609166  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.609170  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.609183  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.609187  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.609192  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.609210  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.609226  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:38.609232  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.609239  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.609250  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:38.609257  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.609268  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.609274  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:50:38.609300  862935 retry.go:31] will retry after 378.765863ms: missing components: kube-dns
	I1119 21:50:38.837575  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:38.838753  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:38.880809  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:38.883989  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:38.995572  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:38.995617  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:38.995626  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:38.995635  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:38.995643  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:38.995648  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:38.995653  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:38.995658  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:38.995673  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:38.995685  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:38.995689  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:38.995694  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:38.995707  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:38.995714  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:38.995722  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:38.995733  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:38.995746  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:38.995753  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.995770  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:38.995781  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:38.995796  862935 retry.go:31] will retry after 717.350866ms: missing components: kube-dns
	I1119 21:50:39.331955  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:39.335854  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:39.380697  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:39.383536  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:39.720226  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:39.720261  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:50:39.720278  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:39.720293  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:39.720301  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:39.720316  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:39.720326  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:39.720331  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:39.720337  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:39.720348  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:39.720352  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:39.720357  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:39.720371  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:39.720383  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:39.720390  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:39.720398  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:39.720406  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:39.720412  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:39.720423  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:39.720427  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:39.720449  862935 retry.go:31] will retry after 946.683909ms: missing components: kube-dns
	I1119 21:50:39.832973  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:39.836337  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:39.933950  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:39.934404  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:40.331932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:40.335862  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:40.379970  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:40.381676  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:40.672502  862935 system_pods.go:86] 19 kube-system pods found
	I1119 21:50:40.672591  862935 system_pods.go:89] "coredns-66bc5c9577-dcqc5" [6e44afdc-2a7c-46bc-a607-243ce8810bc4] Running
	I1119 21:50:40.672617  862935 system_pods.go:89] "csi-hostpath-attacher-0" [212d19b8-b8e5-4408-945d-635faaa491ee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:50:40.672666  862935 system_pods.go:89] "csi-hostpath-resizer-0" [55be4005-775e-45a8-899d-98c05453099a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:50:40.672697  862935 system_pods.go:89] "csi-hostpathplugin-k94bt" [4ef66713-9f71-4503-965b-786ec9ae5d88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:50:40.672724  862935 system_pods.go:89] "etcd-addons-441523" [8aa88aa4-ec7d-4018-8353-8abf76d28e04] Running
	I1119 21:50:40.672757  862935 system_pods.go:89] "kindnet-kz24p" [7a836ba7-bbeb-4083-8430-b8db1db2f05a] Running
	I1119 21:50:40.672781  862935 system_pods.go:89] "kube-apiserver-addons-441523" [9c06b678-161e-4e6c-bd2e-ec41841cdcd9] Running
	I1119 21:50:40.672807  862935 system_pods.go:89] "kube-controller-manager-addons-441523" [67c30c8e-a8b0-47e7-987c-6bd9882bf03a] Running
	I1119 21:50:40.672850  862935 system_pods.go:89] "kube-ingress-dns-minikube" [913fc3ba-7549-4a61-9469-ebc9561791d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:50:40.672877  862935 system_pods.go:89] "kube-proxy-v4ctw" [223d61f9-598c-4926-9bd7-9462399c4157] Running
	I1119 21:50:40.672903  862935 system_pods.go:89] "kube-scheduler-addons-441523" [6bc7e3ef-fef0-4973-9b3c-7694607cddd3] Running
	I1119 21:50:40.672938  862935 system_pods.go:89] "metrics-server-85b7d694d7-sph2x" [3e63d5a2-fd27-4d60-a485-d85e1a4bb06f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:50:40.672965  862935 system_pods.go:89] "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:50:40.672992  862935 system_pods.go:89] "registry-6b586f9694-nmljk" [c97c903e-5f54-424d-9e36-1b29085bd237] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:50:40.673026  862935 system_pods.go:89] "registry-creds-764b6fb674-7msrk" [0664d29c-371e-4498-9492-5bf78cd26131] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:50:40.673050  862935 system_pods.go:89] "registry-proxy-9279r" [837d3a76-d090-4e96-af26-46911fe9a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:50:40.673083  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p69nx" [115d18e6-3bf9-40b4-8a14-f687d4e070ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:40.673118  862935 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tvq5m" [15df629c-64b0-44b5-a926-e25a9a0fd8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:50:40.673141  862935 system_pods.go:89] "storage-provisioner" [5da34f04-6699-44b3-9e94-954b532f4fbd] Running
	I1119 21:50:40.673167  862935 system_pods.go:126] duration metric: took 3.020058043s to wait for k8s-apps to be running ...
	I1119 21:50:40.673202  862935 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 21:50:40.673296  862935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:50:40.688500  862935 system_svc.go:56] duration metric: took 15.288635ms WaitForService to wait for kubelet
	I1119 21:50:40.688632  862935 kubeadm.go:587] duration metric: took 44.535616869s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:50:40.688671  862935 node_conditions.go:102] verifying NodePressure condition ...
	I1119 21:50:40.691759  862935 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 21:50:40.691838  862935 node_conditions.go:123] node cpu capacity is 2
	I1119 21:50:40.691878  862935 node_conditions.go:105] duration metric: took 3.170439ms to run NodePressure ...
	I1119 21:50:40.691921  862935 start.go:242] waiting for startup goroutines ...
	I1119 21:50:40.832968  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:40.835710  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:40.880018  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:40.882810  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:41.336313  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:41.432420  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:41.433052  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:41.433249  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:41.835611  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:41.835698  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:41.879670  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:41.881544  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:42.332567  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:42.335455  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:42.381662  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:42.383443  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:42.832117  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:42.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:42.880167  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:42.882312  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:43.332396  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:43.336497  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:43.381212  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:43.384016  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:43.832231  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:43.836525  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:43.881510  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:43.884284  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.332166  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:44.335608  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:44.379434  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:44.381831  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.832073  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:44.835833  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:44.881676  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:44.881811  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.335243  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:45.338415  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:45.435392  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.435714  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:45.831663  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:45.836242  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:45.879733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:45.882433  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.332502  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:46.336251  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:46.381147  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:46.382919  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.832687  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:46.835778  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:46.882928  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:46.883519  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.338551  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:47.434840  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:47.434989  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.435106  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:47.831742  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:47.836195  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:47.879821  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:47.881912  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:48.333127  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:48.336027  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:48.381457  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:48.383145  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:48.831859  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:48.835891  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:48.880349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:48.881051  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:49.332738  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:49.335611  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:49.380327  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:49.383397  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:49.833555  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:49.836512  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:49.880226  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:49.883389  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:50.332207  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:50.335978  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:50.382038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:50.382680  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:50.832125  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:50.836475  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:50.880915  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:50.883147  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:51.331968  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:51.336067  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:51.379233  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:51.381867  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:51.832266  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:51.835782  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:51.881028  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:51.882291  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:52.332976  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:52.335691  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:52.381445  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:52.382460  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:52.832806  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:52.835285  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:52.880764  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:52.882588  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:53.332146  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:53.335729  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:53.379505  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:53.381563  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:53.832164  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:53.835177  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:53.880598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:53.881841  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:54.331794  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:54.336026  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:54.382812  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:54.383216  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:54.832495  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:54.836486  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:54.880718  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:54.882028  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:55.331581  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:55.335359  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:55.380566  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:55.382399  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:55.831598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:55.835458  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:55.883990  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:55.885204  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.333238  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:56.336483  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:56.380551  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:56.384548  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.838659  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:56.842293  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:56.940055  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:56.940570  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.331849  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:57.344619  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:57.379682  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.381851  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:57.837210  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:57.837309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:57.879733  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:57.882358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:58.332412  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:58.336268  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:58.380985  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:58.382998  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:58.842939  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:58.843176  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:58.882563  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:58.882988  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:59.332642  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:59.335829  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:59.382183  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:59.383401  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:50:59.831850  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:50:59.835464  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:50:59.879958  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:50:59.883061  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:00.335487  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:00.354041  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:00.384063  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:00.385420  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:00.833346  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:00.836186  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:00.879480  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:00.882398  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:01.332369  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:01.336885  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:01.382481  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:01.382719  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:01.832990  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:01.835566  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:01.882524  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:01.883320  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:02.332250  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:02.336626  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:02.382136  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:02.382584  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:02.832844  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:02.835382  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:02.881897  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:02.884159  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:03.332228  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:03.336017  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:03.379521  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:03.382391  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:03.832758  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:03.835520  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:03.880486  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:03.883648  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.331379  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:04.335929  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:04.383686  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:04.384182  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.833020  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:04.835880  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:04.885552  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:04.886426  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.336709  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:05.341781  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:05.384625  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:05.385549  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.840119  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:05.840533  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:05.880390  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:05.886908  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:06.332666  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:06.335467  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:06.381351  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:51:06.383629  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:06.833171  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:06.835510  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:06.883496  862935 kapi.go:107] duration metric: took 1m4.507394384s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 21:51:06.888505  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:07.332274  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:07.335876  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:07.381770  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:07.832501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:07.835806  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:07.902623  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:08.332700  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:08.335218  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:08.381435  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:08.831887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:08.835395  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:08.881836  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:09.332768  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:09.335335  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:09.381824  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:09.831796  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:09.835205  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:09.881841  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:10.338887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:10.339441  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:10.433212  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:10.832499  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:10.836598  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:10.882349  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:11.332675  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:11.335248  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:11.381739  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:11.832731  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:11.838295  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:11.881718  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:12.331582  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:12.335125  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:12.382914  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:12.832411  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:12.836430  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:12.881833  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:13.331347  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:13.336574  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:13.382091  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:13.839846  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:13.840554  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:13.882242  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:14.331957  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:14.335781  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:14.381921  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:14.831659  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:14.835472  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:14.882266  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:15.331803  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:15.335172  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:15.381688  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:15.832007  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:15.835509  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:15.882142  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:16.332286  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:16.335817  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:16.382355  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:16.834501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:16.837289  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:16.933573  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:17.337411  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:17.342145  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:17.382495  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:17.834308  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:17.836383  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:17.882664  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:18.337342  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:18.337766  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:18.436937  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:18.831871  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:18.835961  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:18.884312  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:19.336373  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:19.336494  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:19.385643  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:19.833108  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:19.837186  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:19.885038  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:20.334358  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:20.337341  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:20.388453  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:20.836507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:20.837113  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:20.883800  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:21.344655  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:21.345147  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:21.381552  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:21.832932  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:21.835500  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:21.881690  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:22.332309  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:22.336187  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:22.381646  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:22.832342  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:22.836388  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:22.933902  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:23.332349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:23.336554  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:23.381641  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:23.831735  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:23.836352  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:23.882287  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:24.333803  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:24.335844  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:24.382943  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:24.831131  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:24.835511  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:24.885848  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:25.332097  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:25.335579  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:25.383630  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:25.832981  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:25.835065  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:25.882007  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:26.336175  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:26.336792  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:26.382627  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:26.832259  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:26.835886  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:26.882656  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:27.333099  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:27.334975  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:27.380854  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:27.832699  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:27.835129  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:27.881755  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:28.331572  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:28.335189  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:28.381550  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:28.832596  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:51:28.835496  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:28.881914  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:29.331193  862935 kapi.go:107] duration metric: took 1m26.503121936s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 21:51:29.336632  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:29.382011  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:29.836507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:29.881480  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:30.335752  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:30.381943  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:30.835816  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:30.881778  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:31.336349  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:31.381277  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:31.835592  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:31.881617  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:32.335923  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:32.381960  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:32.836518  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:32.882418  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:33.336029  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:33.382087  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:33.835948  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:33.882282  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:34.335501  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:34.381757  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:34.836059  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:34.881760  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:35.336473  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:35.381546  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:35.836337  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:35.881317  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:36.335507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:36.381585  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:36.835873  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:36.881996  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:37.336716  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:37.382068  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:37.835590  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:37.881921  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:38.336310  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:38.381484  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:38.835155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:38.881063  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:39.335672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:39.382014  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:39.837377  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:39.881592  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:40.336445  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:40.381305  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:40.835637  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:40.883212  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:41.335185  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:41.381736  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:41.836480  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:41.881775  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:42.337460  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:42.382498  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:42.835621  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:42.881763  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:43.336062  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:43.381430  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:43.835507  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:43.881550  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:44.335637  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:44.381737  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:44.836054  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:44.881785  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:45.336522  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:45.382122  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:45.836541  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:45.881651  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:46.335898  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:46.382079  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:46.835672  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:46.883881  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:47.335887  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:47.382573  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:47.836559  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:47.882358  862935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:51:48.336155  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:48.382237  862935 kapi.go:107] duration metric: took 1m46.004160603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 21:51:48.835696  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:49.336278  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:49.836325  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:50.336494  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:50.836138  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:51.336734  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:51.836622  862935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:51:52.336179  862935 kapi.go:107] duration metric: took 1m47.503652652s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 21:51:52.339275  862935 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-441523 cluster.
	I1119 21:51:52.342090  862935 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 21:51:52.344995  862935 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 21:51:52.347879  862935 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 21:51:52.350655  862935 addons.go:515] duration metric: took 1m56.197195738s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1119 21:51:52.350753  862935 start.go:247] waiting for cluster config update ...
	I1119 21:51:52.350789  862935 start.go:256] writing updated cluster config ...
	I1119 21:51:52.351166  862935 ssh_runner.go:195] Run: rm -f paused
	I1119 21:51:52.355889  862935 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:51:52.359650  862935 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dcqc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.364723  862935 pod_ready.go:94] pod "coredns-66bc5c9577-dcqc5" is "Ready"
	I1119 21:51:52.364750  862935 pod_ready.go:86] duration metric: took 5.07169ms for pod "coredns-66bc5c9577-dcqc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.367012  862935 pod_ready.go:83] waiting for pod "etcd-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.371421  862935 pod_ready.go:94] pod "etcd-addons-441523" is "Ready"
	I1119 21:51:52.371449  862935 pod_ready.go:86] duration metric: took 4.410717ms for pod "etcd-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.373486  862935 pod_ready.go:83] waiting for pod "kube-apiserver-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.377912  862935 pod_ready.go:94] pod "kube-apiserver-addons-441523" is "Ready"
	I1119 21:51:52.377939  862935 pod_ready.go:86] duration metric: took 4.419562ms for pod "kube-apiserver-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.380513  862935 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.760276  862935 pod_ready.go:94] pod "kube-controller-manager-addons-441523" is "Ready"
	I1119 21:51:52.760310  862935 pod_ready.go:86] duration metric: took 379.772736ms for pod "kube-controller-manager-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:52.960794  862935 pod_ready.go:83] waiting for pod "kube-proxy-v4ctw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.360249  862935 pod_ready.go:94] pod "kube-proxy-v4ctw" is "Ready"
	I1119 21:51:53.360278  862935 pod_ready.go:86] duration metric: took 399.451346ms for pod "kube-proxy-v4ctw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.560818  862935 pod_ready.go:83] waiting for pod "kube-scheduler-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.960784  862935 pod_ready.go:94] pod "kube-scheduler-addons-441523" is "Ready"
	I1119 21:51:53.960815  862935 pod_ready.go:86] duration metric: took 399.971312ms for pod "kube-scheduler-addons-441523" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:51:53.960829  862935 pod_ready.go:40] duration metric: took 1.604908866s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:51:54.029547  862935 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 21:51:54.032598  862935 out.go:179] * Done! kubectl is now configured to use "addons-441523" cluster and "default" namespace by default
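	
	The gcp-auth note earlier in this log says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch of that pod configuration (the pod name, image, and the label value "true" are illustrative assumptions; the message above mentions only the key):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-auth-demo          # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"  # key is what the addon message asks for; value assumed arbitrary
	    spec:
	      containers:
	      - name: app
	        image: busybox                # placeholder workload
	        command: ["sleep", "3600"]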
	
	
	==> CRI-O <==
	Nov 19 21:52:22 addons-441523 crio[829]: time="2025-11-19T21:52:22.619609753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:22 addons-441523 crio[829]: time="2025-11-19T21:52:22.620168678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:22 addons-441523 crio[829]: time="2025-11-19T21:52:22.638706767Z" level=info msg="Created container 3aab8bd879c8578527b1275c7cc5ca229d71c662259caeefad83233a5bbbf0f7: default/test-local-path/busybox" id=0a82f41c-a90f-43ba-903f-159d591eaa56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:22 addons-441523 crio[829]: time="2025-11-19T21:52:22.639711679Z" level=info msg="Starting container: 3aab8bd879c8578527b1275c7cc5ca229d71c662259caeefad83233a5bbbf0f7" id=30c195cc-aaa0-4a76-bc22-1b81a2799018 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:52:22 addons-441523 crio[829]: time="2025-11-19T21:52:22.64137561Z" level=info msg="Started container" PID=5576 containerID=3aab8bd879c8578527b1275c7cc5ca229d71c662259caeefad83233a5bbbf0f7 description=default/test-local-path/busybox id=30c195cc-aaa0-4a76-bc22-1b81a2799018 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83b3b008c0e9327720e22da354dfc24d4481a0a2d1d3e0e99d22bcc090a988de
	Nov 19 21:52:24 addons-441523 crio[829]: time="2025-11-19T21:52:24.371916438Z" level=info msg="Stopping pod sandbox: 83b3b008c0e9327720e22da354dfc24d4481a0a2d1d3e0e99d22bcc090a988de" id=66dedc56-f08b-4cfd-b35d-31e42130d528 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:52:24 addons-441523 crio[829]: time="2025-11-19T21:52:24.372231134Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:83b3b008c0e9327720e22da354dfc24d4481a0a2d1d3e0e99d22bcc090a988de UID:5041a0b0-d5ed-4ee8-bb83-d82535332819 NetNS:/var/run/netns/026acc99-46e6-4e5b-8289-ad83460a8798 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cd18}] Aliases:map[]}"
	Nov 19 21:52:24 addons-441523 crio[829]: time="2025-11-19T21:52:24.372376153Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:52:24 addons-441523 crio[829]: time="2025-11-19T21:52:24.400990805Z" level=info msg="Stopped pod sandbox: 83b3b008c0e9327720e22da354dfc24d4481a0a2d1d3e0e99d22bcc090a988de" id=66dedc56-f08b-4cfd-b35d-31e42130d528 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.587981396Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9/POD" id=613f4e60-5830-43db-ae71-97c854df9b77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.588051214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.604506051Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9 Namespace:local-path-storage ID:dbe2194ce8b2aafae2fbb38553d448005a915d176bb12ca8ab38f33021caa3a3 UID:b4932884-f5a1-49ee-9f1e-99cd57a65a2d NetNS:/var/run/netns/ffa57aad-ecc7-41b8-a3d3-c81d4b315269 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d218}] Aliases:map[]}"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.604542236Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9 to CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.631840867Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9 Namespace:local-path-storage ID:dbe2194ce8b2aafae2fbb38553d448005a915d176bb12ca8ab38f33021caa3a3 UID:b4932884-f5a1-49ee-9f1e-99cd57a65a2d NetNS:/var/run/netns/ffa57aad-ecc7-41b8-a3d3-c81d4b315269 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d218}] Aliases:map[]}"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.632317009Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9 for CNI network kindnet (type=ptp)"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.638108346Z" level=info msg="Ran pod sandbox dbe2194ce8b2aafae2fbb38553d448005a915d176bb12ca8ab38f33021caa3a3 with infra container: local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9/POD" id=613f4e60-5830-43db-ae71-97c854df9b77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.65168843Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=54f7ebcc-22c2-4f17-a8e2-23d211fef345 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.653329928Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=00cdb7e9-a805-4b53-86f4-98ab3dfa612f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.658985346Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9/helper-pod" id=511c1c03-feac-4bed-bab9-94d978971d6b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.659252623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.672847321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.6735639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.693826952Z" level=info msg="Created container 8fe913529eb7b084d68ced6a0df479101eff8491d4fab6b9671e002d1b2aea6a: local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9/helper-pod" id=511c1c03-feac-4bed-bab9-94d978971d6b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.698764789Z" level=info msg="Starting container: 8fe913529eb7b084d68ced6a0df479101eff8491d4fab6b9671e002d1b2aea6a" id=08cfabae-ebc7-4f37-861a-399bba87b063 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:52:25 addons-441523 crio[829]: time="2025-11-19T21:52:25.705796503Z" level=info msg="Started container" PID=5664 containerID=8fe913529eb7b084d68ced6a0df479101eff8491d4fab6b9671e002d1b2aea6a description=local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9/helper-pod id=08cfabae-ebc7-4f37-861a-399bba87b063 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbe2194ce8b2aafae2fbb38553d448005a915d176bb12ca8ab38f33021caa3a3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	8fe913529eb7b       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   dbe2194ce8b2a       helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9   local-path-storage
	3aab8bd879c85       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   83b3b008c0e93       test-local-path                                              default
	21d26def18c19       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   28c9ad15f89e5       helper-pod-create-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9   local-path-storage
	f844acc9a7b1a       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          8 seconds ago        Exited              registry-test                            0                   dbfceddba0b36       registry-test                                                default
	2154fca68aaca       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   add2bbae3efa8       busybox                                                      default
	ef10317742de6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   54709cc8b1ee4       gcp-auth-78565c9fb4-sckk8                                    gcp-auth
	34574a37d491b       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             39 seconds ago       Running             controller                               0                   38d78b0a38947       ingress-nginx-controller-6c8bf45fb-rv9b4                     ingress-nginx
	96f30c790da8c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          58 seconds ago       Running             csi-snapshotter                          0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	9da6964451ec2       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             59 seconds ago       Exited              patch                                    2                   ab6343e6098ad       ingress-nginx-admission-patch-tc7l2                          ingress-nginx
	263912064df3e       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	73c4790ba1baf       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	f01ebeeec44c8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	6f1fc06239abc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   e235fd1afb89d       gadget-d99sd                                                 gadget
	c4eac1059aec2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	2622c6f351152       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              patch                                    0                   77b82a5852260       gcp-auth-certs-patch-6sfbp                                   gcp-auth
	9b5b4ec60deae       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   9244debc7b674       metrics-server-85b7d694d7-sph2x                              kube-system
	30b145c697f23       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   04c39abccbef5       yakd-dashboard-5ff678cb9-c98f8                               yakd-dashboard
	edc8c67432b98       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   27dd06aab8cdf       snapshot-controller-7d9fbc56b8-p69nx                         kube-system
	f30d47c42b19c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   775abe9cde1b9       ingress-nginx-admission-create-4d2m6                         ingress-nginx
	8f25e4db79cca       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   727c9c34479cc       csi-hostpath-resizer-0                                       kube-system
	28bb9ca16548a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   0657145fbd4a2       registry-6b586f9694-nmljk                                    kube-system
	0cf11b3427234       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   23d58878a712d       kube-ingress-dns-minikube                                    kube-system
	ce4788277f9a6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   89f6ba4ef53f1       registry-proxy-9279r                                         kube-system
	23061303ad569       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   5837757f52d8d       local-path-provisioner-648f6765c9-9z5cl                      local-path-storage
	6abf291cbc69c       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   f037a5d3ccc6b       cloud-spanner-emulator-6f9fcf858b-mk92d                      default
	46c0e17f82719       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   df9bcdbfdb3f6       csi-hostpathplugin-k94bt                                     kube-system
	820452bcc27f8       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   5916b052c08aa       nvidia-device-plugin-daemonset-7k2x9                         kube-system
	de9a0b0f37cb6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   fffc59c5a0310       csi-hostpath-attacher-0                                      kube-system
	f8301b586f555       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   23197bfa36a96       snapshot-controller-7d9fbc56b8-tvq5m                         kube-system
	66d8b85866603       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   460c463796b65       coredns-66bc5c9577-dcqc5                                     kube-system
	55d6ec9aa9d53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   f58f66f298351       storage-provisioner                                          kube-system
	b69600b273a1e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   5ebebe0689ec6       kube-proxy-v4ctw                                             kube-system
	f0b1f859006b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   cd3df3b4720c3       kindnet-kz24p                                                kube-system
	29fa20fcf4b84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   8bc9812b3b6f2       etcd-addons-441523                                           kube-system
	c8ee152b70c2c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   c7f68659f681b       kube-scheduler-addons-441523                                 kube-system
	d6958a88d2715       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   48101ac26f782       kube-apiserver-addons-441523                                 kube-system
	8e7ca5d3f3c7d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   a6dd97d74f14a       kube-controller-manager-addons-441523                        kube-system
	
	
	==> coredns [66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d] <==
	[INFO] 10.244.0.5:46511 - 57470 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00154773s
	[INFO] 10.244.0.5:46511 - 58694 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000131571s
	[INFO] 10.244.0.5:46511 - 40228 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000080518s
	[INFO] 10.244.0.5:35917 - 12832 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018118s
	[INFO] 10.244.0.5:35917 - 12594 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000197993s
	[INFO] 10.244.0.5:52613 - 40814 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085375s
	[INFO] 10.244.0.5:52613 - 40625 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069047s
	[INFO] 10.244.0.5:45581 - 62665 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008421s
	[INFO] 10.244.0.5:45581 - 62460 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068612s
	[INFO] 10.244.0.5:44242 - 30871 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001666205s
	[INFO] 10.244.0.5:44242 - 30699 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001646579s
	[INFO] 10.244.0.5:45510 - 52359 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114266s
	[INFO] 10.244.0.5:45510 - 52212 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014538s
	[INFO] 10.244.0.21:54506 - 12885 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170603s
	[INFO] 10.244.0.21:57111 - 36597 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252992s
	[INFO] 10.244.0.21:48495 - 35035 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143198s
	[INFO] 10.244.0.21:52899 - 29726 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000311823s
	[INFO] 10.244.0.21:44520 - 63407 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165647s
	[INFO] 10.244.0.21:35058 - 24855 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152396s
	[INFO] 10.244.0.21:43106 - 22714 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002461177s
	[INFO] 10.244.0.21:57894 - 62563 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002123442s
	[INFO] 10.244.0.21:34480 - 15293 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001969528s
	[INFO] 10.244.0.21:38829 - 61195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003362974s
	[INFO] 10.244.0.23:53274 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158517s
	[INFO] 10.244.0.23:57406 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147595s
	
	
	==> describe nodes <==
	Name:               addons-441523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-441523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=addons-441523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_49_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-441523
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-441523"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:49:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-441523
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 21:52:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 21:52:23 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 21:52:23 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 21:52:23 +0000   Wed, 19 Nov 2025 21:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 21:52:23 +0000   Wed, 19 Nov 2025 21:50:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-441523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                7e0289d3-2b72-41ab-9b05-c5cdea4768cd
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-6f9fcf858b-mk92d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-d99sd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gcp-auth                    gcp-auth-78565c9fb4-sckk8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-rv9b4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m25s
	  kube-system                 coredns-66bc5c9577-dcqc5                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m31s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 csi-hostpathplugin-k94bt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 etcd-addons-441523                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m37s
	  kube-system                 kindnet-kz24p                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-addons-441523                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-addons-441523       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-v4ctw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-addons-441523                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 metrics-server-85b7d694d7-sph2x             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m27s
	  kube-system                 nvidia-device-plugin-daemonset-7k2x9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 registry-6b586f9694-nmljk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 registry-creds-764b6fb674-7msrk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 registry-proxy-9279r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-p69nx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 snapshot-controller-7d9fbc56b8-tvq5m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  local-path-storage          local-path-provisioner-648f6765c9-9z5cl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-c98f8              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m30s                  kube-proxy       
	  Normal   Starting                 2m44s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node addons-441523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node addons-441523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x8 over 2m44s)  kubelet          Node addons-441523 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s                  kubelet          Node addons-441523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s                  kubelet          Node addons-441523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s                  kubelet          Node addons-441523 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m33s                  node-controller  Node addons-441523 event: Registered Node addons-441523 in Controller
	  Normal   NodeReady                110s                   kubelet          Node addons-441523 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 21:49] overlayfs: idmapped layers are currently not supported
	[  +0.079274] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40] <==
	{"level":"warn","ts":"2025-11-19T21:49:46.647597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.683378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.701152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.737243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.751707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.779804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.817791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.850010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.868007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.894198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.906496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.922795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.945217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.961446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:46.973266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.006602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.024296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.047393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:49:47.140534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:03.030990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:03.044982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.883158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.901294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.952161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:50:24.967117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55258","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ef10317742de6fb8a4ebf6e6ccbc71181b266fc7dc85d9c298129ebe3d52a1f9] <==
	2025/11/19 21:51:51 GCP Auth Webhook started!
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:51:54 Ready to marshal response ...
	2025/11/19 21:51:54 Ready to write response ...
	2025/11/19 21:52:15 Ready to marshal response ...
	2025/11/19 21:52:15 Ready to write response ...
	2025/11/19 21:52:17 Ready to marshal response ...
	2025/11/19 21:52:17 Ready to write response ...
	2025/11/19 21:52:17 Ready to marshal response ...
	2025/11/19 21:52:17 Ready to write response ...
	2025/11/19 21:52:25 Ready to marshal response ...
	2025/11/19 21:52:25 Ready to write response ...
	
	
	==> kernel <==
	 21:52:27 up  3:34,  0 user,  load average: 1.85, 1.56, 1.68
	Linux addons-441523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97] <==
	I1119 21:50:28.752116       1 controller.go:711] "Syncing nftables rules"
	I1119 21:50:37.257078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:37.257132       1 main.go:301] handling current node
	I1119 21:50:47.250781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:47.250838       1 main.go:301] handling current node
	I1119 21:50:57.250330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:57.250360       1 main.go:301] handling current node
	I1119 21:51:07.250119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:07.250151       1 main.go:301] handling current node
	I1119 21:51:17.250815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:17.250920       1 main.go:301] handling current node
	I1119 21:51:27.251041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:27.251184       1 main.go:301] handling current node
	I1119 21:51:37.250128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:37.250189       1 main.go:301] handling current node
	I1119 21:51:47.251146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:47.251179       1 main.go:301] handling current node
	I1119 21:51:57.251040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:57.251070       1 main.go:301] handling current node
	I1119 21:52:07.255462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:52:07.255497       1 main.go:301] handling current node
	I1119 21:52:17.251187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:52:17.251222       1 main.go:301] handling current node
	I1119 21:52:27.250123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:52:27.250167       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629] <==
	E1119 21:50:37.413476       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.10:443: connect: connection refused" logger="UnhandledError"
	W1119 21:50:37.414077       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.10:443: connect: connection refused
	E1119 21:50:37.414114       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.10:443: connect: connection refused" logger="UnhandledError"
	W1119 21:50:37.568147       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.246.10:443: connect: connection refused
	E1119 21:50:37.568192       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.246.10:443: connect: connection refused" logger="UnhandledError"
	W1119 21:51:02.505976       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:02.506021       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1119 21:51:02.506036       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:51:02.507190       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:02.507268       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:51:02.507279       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:51:29.052810       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:51:29.052881       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 21:51:29.053762       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.221.128:443: connect: connection refused" logger="UnhandledError"
	E1119 21:51:29.056830       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.221.128:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.221.128:443: connect: connection refused" logger="UnhandledError"
	I1119 21:51:29.171415       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 21:52:04.048103       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55982: use of closed network connection
	E1119 21:52:04.275809       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56008: use of closed network connection
	E1119 21:52:04.401119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56026: use of closed network connection
	
	
	==> kube-controller-manager [8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487] <==
	I1119 21:49:54.881871       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 21:49:54.881890       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 21:49:54.897220       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:49:54.899399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:49:54.904546       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 21:49:54.914142       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 21:49:54.914284       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 21:49:54.914819       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 21:49:54.914839       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 21:49:54.915970       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 21:49:54.917898       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 21:49:54.918937       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1119 21:50:00.561283       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 21:50:24.874721       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:50:24.874911       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 21:50:24.875020       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 21:50:24.939845       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 21:50:24.944269       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 21:50:24.976020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:50:25.045334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:50:39.874716       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1119 21:50:54.983512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:50:55.074771       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1119 21:51:24.988816       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:51:25.084511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea] <==
	I1119 21:49:57.071104       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:49:57.176574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:49:57.277429       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:49:57.277457       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:49:57.277524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:49:57.354530       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:49:57.354578       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:49:57.366379       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:49:57.366675       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:49:57.366698       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:49:57.368072       1 config.go:200] "Starting service config controller"
	I1119 21:49:57.368090       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:49:57.368116       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:49:57.368120       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:49:57.368139       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:49:57.368143       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:49:57.368746       1 config.go:309] "Starting node config controller"
	I1119 21:49:57.368759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:49:57.368764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:49:57.468571       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:49:57.468644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:49:57.468286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3] <==
	E1119 21:49:47.916692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:49:47.917626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 21:49:47.919063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:49:47.920119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 21:49:47.920398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:49:47.920511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:49:47.920601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:49:47.920689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:49:47.920798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 21:49:47.920888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:49:47.921025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:49:47.921144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:49:48.722387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:49:48.728010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:49:48.741854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:49:48.760009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:49:48.815158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:49:48.842419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:49:48.879659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:49:48.994025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:49:49.040861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:49:49.053452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:49:49.115738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:49:49.380225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1119 21:49:52.316176       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 21:52:24 addons-441523 kubelet[1279]: I1119 21:52:24.550430    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5041a0b0-d5ed-4ee8-bb83-d82535332819-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9" (OuterVolumeSpecName: "data") pod "5041a0b0-d5ed-4ee8-bb83-d82535332819" (UID: "5041a0b0-d5ed-4ee8-bb83-d82535332819"). InnerVolumeSpecName "pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 21:52:24 addons-441523 kubelet[1279]: I1119 21:52:24.556798    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5041a0b0-d5ed-4ee8-bb83-d82535332819-kube-api-access-blxwj" (OuterVolumeSpecName: "kube-api-access-blxwj") pod "5041a0b0-d5ed-4ee8-bb83-d82535332819" (UID: "5041a0b0-d5ed-4ee8-bb83-d82535332819"). InnerVolumeSpecName "kube-api-access-blxwj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 21:52:24 addons-441523 kubelet[1279]: I1119 21:52:24.650532    1279 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5041a0b0-d5ed-4ee8-bb83-d82535332819-gcp-creds\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:24 addons-441523 kubelet[1279]: I1119 21:52:24.650767    1279 reconciler_common.go:299] "Volume detached for volume \"pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\" (UniqueName: \"kubernetes.io/host-path/5041a0b0-d5ed-4ee8-bb83-d82535332819-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:24 addons-441523 kubelet[1279]: I1119 21:52:24.650797    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-blxwj\" (UniqueName: \"kubernetes.io/projected/5041a0b0-d5ed-4ee8-bb83-d82535332819-kube-api-access-blxwj\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:25 addons-441523 kubelet[1279]: I1119 21:52:25.377095    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b3b008c0e9327720e22da354dfc24d4481a0a2d1d3e0e99d22bcc090a988de"
	Nov 19 21:52:25 addons-441523 kubelet[1279]: E1119 21:52:25.379181    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-441523\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-441523' and this object" podUID="5041a0b0-d5ed-4ee8-bb83-d82535332819" pod="default/test-local-path"
	Nov 19 21:52:25 addons-441523 kubelet[1279]: I1119 21:52:25.460348    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-script\") pod \"helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") " pod="local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9"
	Nov 19 21:52:25 addons-441523 kubelet[1279]: I1119 21:52:25.460398    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-data\") pod \"helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") " pod="local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9"
	Nov 19 21:52:25 addons-441523 kubelet[1279]: I1119 21:52:25.460436    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzqdr\" (UniqueName: \"kubernetes.io/projected/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-kube-api-access-jzqdr\") pod \"helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") " pod="local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9"
	Nov 19 21:52:25 addons-441523 kubelet[1279]: I1119 21:52:25.460467    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-gcp-creds\") pod \"helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") " pod="local-path-storage/helper-pod-delete-pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9"
	Nov 19 21:52:26 addons-441523 kubelet[1279]: E1119 21:52:26.412167    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-441523\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-441523' and this object" podUID="5041a0b0-d5ed-4ee8-bb83-d82535332819" pod="default/test-local-path"
	Nov 19 21:52:26 addons-441523 kubelet[1279]: I1119 21:52:26.418478    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5041a0b0-d5ed-4ee8-bb83-d82535332819" path="/var/lib/kubelet/pods/5041a0b0-d5ed-4ee8-bb83-d82535332819/volumes"
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587134    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-gcp-creds\") pod \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") "
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587201    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzqdr\" (UniqueName: \"kubernetes.io/projected/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-kube-api-access-jzqdr\") pod \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") "
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587236    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-script\") pod \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") "
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587277    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-data\") pod \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\" (UID: \"b4932884-f5a1-49ee-9f1e-99cd57a65a2d\") "
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587442    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-data" (OuterVolumeSpecName: "data") pod "b4932884-f5a1-49ee-9f1e-99cd57a65a2d" (UID: "b4932884-f5a1-49ee-9f1e-99cd57a65a2d"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.587473    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b4932884-f5a1-49ee-9f1e-99cd57a65a2d" (UID: "b4932884-f5a1-49ee-9f1e-99cd57a65a2d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.588320    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-script" (OuterVolumeSpecName: "script") pod "b4932884-f5a1-49ee-9f1e-99cd57a65a2d" (UID: "b4932884-f5a1-49ee-9f1e-99cd57a65a2d"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.590240    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-kube-api-access-jzqdr" (OuterVolumeSpecName: "kube-api-access-jzqdr") pod "b4932884-f5a1-49ee-9f1e-99cd57a65a2d" (UID: "b4932884-f5a1-49ee-9f1e-99cd57a65a2d"). InnerVolumeSpecName "kube-api-access-jzqdr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.687892    1279 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-data\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.687940    1279 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-gcp-creds\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.687954    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jzqdr\" (UniqueName: \"kubernetes.io/projected/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-kube-api-access-jzqdr\") on node \"addons-441523\" DevicePath \"\""
	Nov 19 21:52:27 addons-441523 kubelet[1279]: I1119 21:52:27.687972    1279 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b4932884-f5a1-49ee-9f1e-99cd57a65a2d-script\") on node \"addons-441523\" DevicePath \"\""
	
	
	==> storage-provisioner [55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d] <==
	W1119 21:52:03.068321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:05.072128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:05.078952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:07.082407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:07.086942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:09.090565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:09.097414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:11.105697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:11.117191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:13.120306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:13.124877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:15.128069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:15.135328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:17.138059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:17.145028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:19.148407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:19.156624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:21.160119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:21.165008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:23.168752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:23.176025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:25.179979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:25.187326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:27.192903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:27.198801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-441523 -n addons-441523
helpers_test.go:269: (dbg) Run:  kubectl --context addons-441523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2 registry-creds-764b6fb674-7msrk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2 registry-creds-764b6fb674-7msrk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2 registry-creds-764b6fb674-7msrk: exit status 1 (90.749989ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4d2m6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tc7l2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7msrk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-441523 describe pod ingress-nginx-admission-create-4d2m6 ingress-nginx-admission-patch-tc7l2 registry-creds-764b6fb674-7msrk: exit status 1
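Note: the NotFound errors above are most likely namespace artifacts rather than missing pods — the post-mortem `describe pod` runs without a namespace flag and therefore looks in `default`, while registry-creds-764b6fb674-7msrk is scheduled in kube-system (see the node description earlier) and the two admission pods belong to the ingress-nginx namespace. A namespace-qualified re-check, shown here only as an illustrative follow-up and not something the harness executes, would be:

	kubectl --context addons-441523 -n kube-system describe pod registry-creds-764b6fb674-7msrk
	kubectl --context addons-441523 get pods -A --field-selector=status.phase!=Running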
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable headlamp --alsologtostderr -v=1: exit status 11 (269.369083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:28.993269  870474 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:28.994057  870474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:28.994120  870474 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:28.994135  870474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:28.994465  870474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:28.994837  870474 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:28.995311  870474 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:28.995331  870474 addons.go:607] checking whether the cluster is paused
	I1119 21:52:28.995478  870474 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:28.995497  870474 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:28.996267  870474 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:29.022815  870474 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:29.022922  870474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:29.040726  870474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:29.141500  870474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:29.141608  870474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:29.171314  870474 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:29.171340  870474 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:29.171345  870474 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:29.171357  870474 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:29.171361  870474 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:29.171364  870474 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:29.171373  870474 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:29.171377  870474 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:29.171380  870474 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:29.171386  870474 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:29.171390  870474 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:29.171393  870474 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:29.171397  870474 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:29.171400  870474 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:29.171403  870474 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:29.171408  870474 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:29.171414  870474 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:29.171425  870474 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:29.171428  870474 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:29.171432  870474 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:29.171436  870474 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:29.171443  870474 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:29.171446  870474 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:29.171449  870474 cri.go:89] found id: ""
	I1119 21:52:29.171504  870474 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:29.186508  870474 out.go:203] 
	W1119 21:52:29.189420  870474 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:29.189448  870474 out.go:285] * 
	* 
	W1119 21:52:29.196108  870474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:29.199336  870474 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.57s)
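The MK_ADDON_DISABLE_PAUSED exit status 11 above comes from minikube's paused-cluster check: it inspects the node container's state, lists kube-system containers with crictl, and then runs `sudo runc list -f json`, which fails on this cri-o node with `open /run/runc: no such file or directory`. The sequence can be replayed by hand roughly as follows — an illustrative sketch using the `minikube ssh` wrapper; the test itself drives these commands through an internal SSH runner rather than this CLI:

	# node container state, as checked by the cli_runner step in the stderr trace
	docker container inspect addons-441523 --format={{.State.Status}}
	# the two in-node commands from the trace above
	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-441523 ssh -- sudo runc list -f json   # expected to reproduce the /run/runc error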

                                                
                                    
TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-mk92d" [5a6ae8e3-dc81-46b1-b74e-b84b57eddcff] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004444931s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (447.49741ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:26.126483  869972 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:26.133424  869972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:26.133442  869972 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:26.133448  869972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:26.133737  869972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:26.134044  869972 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:26.134405  869972 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:26.134415  869972 addons.go:607] checking whether the cluster is paused
	I1119 21:52:26.134519  869972 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:26.134530  869972 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:26.144489  869972 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:26.201060  869972 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:26.201123  869972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:26.231639  869972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:26.333811  869972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:26.333893  869972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:26.369008  869972 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:26.369033  869972 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:26.369039  869972 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:26.369043  869972 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:26.369047  869972 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:26.369051  869972 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:26.369054  869972 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:26.369057  869972 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:26.369061  869972 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:26.369067  869972 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:26.369071  869972 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:26.369075  869972 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:26.369083  869972 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:26.369086  869972 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:26.369090  869972 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:26.369099  869972 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:26.369106  869972 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:26.369111  869972 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:26.369115  869972 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:26.369118  869972 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:26.369123  869972 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:26.369126  869972 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:26.369129  869972 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:26.369132  869972 cri.go:89] found id: ""
	I1119 21:52:26.369182  869972 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:26.405050  869972 out.go:203] 
	W1119 21:52:26.412849  869972 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:26.412883  869972 out.go:285] * 
	* 
	W1119 21:52:26.431575  869972 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:26.436004  869972 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
TestAddons/parallel/LocalPath (8.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-441523 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-441523 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/11/19 21:52:20 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5041a0b0-d5ed-4ee8-bb83-d82535332819] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5041a0b0-d5ed-4ee8-bb83-d82535332819] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5041a0b0-d5ed-4ee8-bb83-d82535332819] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003739906s
addons_test.go:967: (dbg) Run:  kubectl --context addons-441523 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 ssh "cat /opt/local-path-provisioner/pvc-1a0187e4-62c0-460f-aaf6-506a7bb12cb9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-441523 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-441523 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (302.225247ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:25.382550  869833 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:25.383214  869833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:25.383236  869833 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:25.383242  869833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:25.383639  869833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:25.383984  869833 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:25.384629  869833 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:25.384642  869833 addons.go:607] checking whether the cluster is paused
	I1119 21:52:25.385073  869833 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:25.385097  869833 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:25.385557  869833 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:25.405686  869833 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:25.405748  869833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:25.423346  869833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:25.537535  869833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:25.537625  869833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:25.595393  869833 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:25.595417  869833 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:25.595422  869833 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:25.595427  869833 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:25.595432  869833 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:25.595436  869833 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:25.595439  869833 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:25.595442  869833 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:25.595452  869833 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:25.595459  869833 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:25.595462  869833 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:25.595471  869833 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:25.595475  869833 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:25.595478  869833 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:25.595481  869833 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:25.595486  869833 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:25.595490  869833 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:25.595494  869833 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:25.595498  869833 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:25.595505  869833 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:25.595510  869833 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:25.595513  869833 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:25.595516  869833 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:25.595519  869833 cri.go:89] found id: ""
	I1119 21:52:25.595567  869833 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:25.616452  869833 out.go:203] 
	W1119 21:52:25.619478  869833 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:25.619508  869833 out.go:285] * 
	* 
	W1119 21:52:25.627270  869833 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:25.630517  869833 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.41s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7k2x9" [01c81149-8e63-48d6-b47d-54cf20b36ac8] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003637716s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (263.862785ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:17.027410  869394 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:17.028048  869394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:17.028093  869394 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:17.028116  869394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:17.028425  869394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:17.028746  869394 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:17.029159  869394 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:17.029205  869394 addons.go:607] checking whether the cluster is paused
	I1119 21:52:17.029338  869394 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:17.029374  869394 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:17.029843  869394 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:17.047749  869394 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:17.047813  869394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:17.067692  869394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:17.165548  869394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:17.165645  869394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:17.194775  869394 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:17.194801  869394 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:17.194806  869394 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:17.194809  869394 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:17.194813  869394 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:17.194816  869394 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:17.194819  869394 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:17.194823  869394 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:17.194826  869394 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:17.194833  869394 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:17.194836  869394 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:17.194839  869394 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:17.194842  869394 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:17.194845  869394 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:17.194848  869394 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:17.194856  869394 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:17.194859  869394 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:17.194901  869394 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:17.194906  869394 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:17.194910  869394 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:17.194916  869394 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:17.194919  869394 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:17.194922  869394 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:17.194925  869394 cri.go:89] found id: ""
	I1119 21:52:17.194976  869394 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:17.210513  869394 out.go:203] 
	W1119 21:52:17.213456  869394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:17.213482  869394 out.go:285] * 
	* 
	W1119 21:52:17.219856  869394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:17.223045  869394 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                    
TestAddons/parallel/Yakd (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-c98f8" [db046cd0-c67a-4de6-9cf6-b60b414b2ce0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004092753s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-441523 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-441523 addons disable yakd --alsologtostderr -v=1: exit status 11 (280.745182ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:10.734257  869282 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:10.734997  869282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:10.735040  869282 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:10.735066  869282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:10.735359  869282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:52:10.735680  869282 mustload.go:66] Loading cluster: addons-441523
	I1119 21:52:10.736128  869282 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:10.736178  869282 addons.go:607] checking whether the cluster is paused
	I1119 21:52:10.736308  869282 config.go:182] Loaded profile config "addons-441523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:10.736346  869282 host.go:66] Checking if "addons-441523" exists ...
	I1119 21:52:10.736865  869282 cli_runner.go:164] Run: docker container inspect addons-441523 --format={{.State.Status}}
	I1119 21:52:10.754443  869282 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:10.754498  869282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-441523
	I1119 21:52:10.773024  869282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/addons-441523/id_rsa Username:docker}
	I1119 21:52:10.873523  869282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:10.873670  869282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:10.912507  869282 cri.go:89] found id: "96f30c790da8cf5d8d6dc12a46a24b8fb246c51e1bdf4a419d0fc95d80752861"
	I1119 21:52:10.912554  869282 cri.go:89] found id: "263912064df3e239c8730b2152652c3ea930878fcce5cb1816b2df0a0fb93822"
	I1119 21:52:10.912560  869282 cri.go:89] found id: "73c4790ba1baf0c5b92e9f9c87a5c91194c86cb71c498e76da0f832f20e66fbb"
	I1119 21:52:10.912564  869282 cri.go:89] found id: "f01ebeeec44c88b2d75d931760fff7eb2761900ff31f5c9617ceb36f57ed6d01"
	I1119 21:52:10.912568  869282 cri.go:89] found id: "c4eac1059aec2135d2fd0e324981e03f7aeaf3b360d2d58edf11555cad278c5f"
	I1119 21:52:10.912573  869282 cri.go:89] found id: "9b5b4ec60deaef37df28c63af05df57fa47230e15523c2bb3bf1de9d1aa248a7"
	I1119 21:52:10.912577  869282 cri.go:89] found id: "edc8c67432b984334991d29e3829549802f72294e506c8c685008c6461b83aba"
	I1119 21:52:10.912580  869282 cri.go:89] found id: "8f25e4db79ccafdac3039c48ac81d28d9a1bbc105daad58b7c2f83989067927a"
	I1119 21:52:10.912584  869282 cri.go:89] found id: "28bb9ca16548a71c4d6cc758ed5f62305f35cea4799bfb1dd23784d4495a9d3d"
	I1119 21:52:10.912595  869282 cri.go:89] found id: "0cf11b34272344720c697ca9ff323f950846938e456dea0f0bd7193df5f08f10"
	I1119 21:52:10.912599  869282 cri.go:89] found id: "ce4788277f9a68974420803db8ff9cee366a9749ea09ebf3f9362f7c950b21cb"
	I1119 21:52:10.912602  869282 cri.go:89] found id: "46c0e17f82719830a1c4c08ad54c9b26f998f792fa967f7112b0d77f2c1b3081"
	I1119 21:52:10.912611  869282 cri.go:89] found id: "820452bcc27f885bfd3d19cccd2a048082e0440074569bb8dec4c45abcd5e5d9"
	I1119 21:52:10.912615  869282 cri.go:89] found id: "de9a0b0f37cb634901faf1ae29031e19925a1e80ae7fd0fc44f6aaec785e47a7"
	I1119 21:52:10.912619  869282 cri.go:89] found id: "f8301b586f5550686bab98df95394340856f817619b7e4667595b1acdb2bf5e1"
	I1119 21:52:10.912628  869282 cri.go:89] found id: "66d8b85866603b25d6936d742cbed65124365745d32c112f6080c1927443b23d"
	I1119 21:52:10.912636  869282 cri.go:89] found id: "55d6ec9aa9d53ce1afbae5b0fa9beb27ed2714a2e5a29dd29bd15ae4a7bd9b3d"
	I1119 21:52:10.912641  869282 cri.go:89] found id: "b69600b273a1e2e4f376ac258e2e3a989fc108606d951a13b1bad2d760a25eea"
	I1119 21:52:10.912644  869282 cri.go:89] found id: "f0b1f859006b1d557965157e5e6b78dc112413e627825d2fc105f14e22352c97"
	I1119 21:52:10.912647  869282 cri.go:89] found id: "29fa20fcf4b8487126e49f0d02dfae3c287bedf241dbd5c8c43fefced61dde40"
	I1119 21:52:10.912652  869282 cri.go:89] found id: "c8ee152b70c2cedc18956007862bef70163ebd092dfcd4bf12987b6aab3ad0b3"
	I1119 21:52:10.912655  869282 cri.go:89] found id: "d6958a88d2715c055878f18a86feacaf027bb947dae601874e64301fc8d56629"
	I1119 21:52:10.912658  869282 cri.go:89] found id: "8e7ca5d3f3c7d469b6bb55c7680c9d5e1d0df1909dbef6aff3aae91823fe4487"
	I1119 21:52:10.912661  869282 cri.go:89] found id: ""
	I1119 21:52:10.912719  869282 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:10.938497  869282 out.go:203] 
	W1119 21:52:10.941525  869282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:10.941615  869282 out.go:285] * 
	* 
	W1119 21:52:10.948964  869282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:10.951975  869282 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-441523 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (604.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-642533 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-642533 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sxlzs" [e178097e-0efa-4f2d-9d4c-70558b0dd158] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-642533 -n functional-642533
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-19 22:13:50.853207164 +0000 UTC m=+1573.760530960
functional_test.go:1645: (dbg) Run:  kubectl --context functional-642533 describe po hello-node-connect-7d85dfc575-sxlzs -n default
functional_test.go:1645: (dbg) kubectl --context functional-642533 describe po hello-node-connect-7d85dfc575-sxlzs -n default:
Name:             hello-node-connect-7d85dfc575-sxlzs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-642533/192.168.49.2
Start Time:       Wed, 19 Nov 2025 22:03:50 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gb5gj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gb5gj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sxlzs to functional-642533
Normal   Pulling    7m11s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m11s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m11s (x5 over 9m58s)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-642533 logs hello-node-connect-7d85dfc575-sxlzs -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-642533 logs hello-node-connect-7d85dfc575-sxlzs -n default: exit status 1 (113.60235ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-sxlzs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-642533 logs hello-node-connect-7d85dfc575-sxlzs -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-642533 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-sxlzs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-642533/192.168.49.2
Start Time:       Wed, 19 Nov 2025 22:03:50 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gb5gj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gb5gj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sxlzs to functional-642533
Normal   Pulling    7m12s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m12s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-642533 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-642533 logs -l app=hello-node-connect: exit status 1 (85.315698ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-sxlzs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-642533 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-642533 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.57.31
IPs:                      10.106.57.31
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30229/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
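The pod events above show the root cause of this failure: the kubelet never pulls `kicbase/echo-server` because the node enforces short-name resolution ("short name mode is enforcing") and the unqualified image name resolves ambiguously across the configured registries, so the pod stays in ImagePullBackOff and the NodePort service is left with no endpoints. A minimal sketch of two ways to confirm or work around this, assuming the image is hosted on Docker Hub (the docker.io prefix is an assumption) and that the node follows the containers/image registries.conf convention:

	# point the existing deployment at a fully-qualified image reference
	kubectl --context functional-642533 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# inspect the node's short-name policy (assumed config location)
	minikube -p functional-642533 ssh -- grep -n short-name-mode /etc/containers/registries.conf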
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-642533
helpers_test.go:243: (dbg) docker inspect functional-642533:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970",
	        "Created": "2025-11-19T21:56:24.64995525Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 878039,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:56:24.712529521Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970/hostname",
	        "HostsPath": "/var/lib/docker/containers/b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970/hosts",
	        "LogPath": "/var/lib/docker/containers/b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970/b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970-json.log",
	        "Name": "/functional-642533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-642533:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-642533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b51fe19ad13c1f6b09416e9cb0ead366a33dd9dbf0f69c112a56b742c5ec7970",
	                "LowerDir": "/var/lib/docker/overlay2/0387682ab409d4c1adda89714a099997684ff37be3fc6a61534f8cbd18563355-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0387682ab409d4c1adda89714a099997684ff37be3fc6a61534f8cbd18563355/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0387682ab409d4c1adda89714a099997684ff37be3fc6a61534f8cbd18563355/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0387682ab409d4c1adda89714a099997684ff37be3fc6a61534f8cbd18563355/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-642533",
	                "Source": "/var/lib/docker/volumes/functional-642533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-642533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-642533",
	                "name.minikube.sigs.k8s.io": "functional-642533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6722b77c3c8e525660beab71480aa6b28cad39ea87bfc034554f5b39fbbb33a1",
	            "SandboxKey": "/var/run/docker/netns/6722b77c3c8e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33571"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33572"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33575"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33573"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33574"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-642533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:d3:de:9e:b6:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "351c2836bed1927d167e945c541de03fc867251448682ac3e7d56071f4b14e59",
	                    "EndpointID": "78654ed231105eefb63c76a19c763e771867802fbd29a20cd6dbc6a2ca230269",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-642533",
	                        "b51fe19ad13c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
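The inspect output above is mainly useful for the resource limits and the port map: HostConfig.Memory 4294967296 and NanoCpus 2000000000 are the profile's 4096 MiB and 2 CPUs (4096*1024*1024 bytes, 2e9 nano-CPUs), and the container publishes ports 22, 2376, 5000, 8441 and 32443 on 127.0.0.1, with the API server port 8441/tcp landing on host port 33574. A minimal sketch of pulling one mapping back out, using the same Go-template style the minikube logs below use for 22/tcp; functional-642533 is the container name from the output above:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-642533

On this run that would print 33574.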
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-642533 -n functional-642533
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 logs -n 25: (1.530405372s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p addons-441523                                                                                                        │ addons-441523     │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ start   │ -p nospam-909089 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-909089 --driver=docker  --container-runtime=crio │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:56 UTC │
	│ start   │ nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run                                                              │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ start   │ nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run                                                              │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ start   │ nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run                                                              │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ pause   │ nospam-909089 --log_dir /tmp/nospam-909089 pause                                                                        │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ pause   │ nospam-909089 --log_dir /tmp/nospam-909089 pause                                                                        │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ pause   │ nospam-909089 --log_dir /tmp/nospam-909089 pause                                                                        │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ unpause │ nospam-909089 --log_dir /tmp/nospam-909089 unpause                                                                      │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ unpause │ nospam-909089 --log_dir /tmp/nospam-909089 unpause                                                                      │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ unpause │ nospam-909089 --log_dir /tmp/nospam-909089 unpause                                                                      │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	│ stop    │ nospam-909089 --log_dir /tmp/nospam-909089 stop                                                                         │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ 19 Nov 25 21:56 UTC │
	│ stop    │ nospam-909089 --log_dir /tmp/nospam-909089 stop                                                                         │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ 19 Nov 25 21:56 UTC │
	│ stop    │ nospam-909089 --log_dir /tmp/nospam-909089 stop                                                                         │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ 19 Nov 25 21:56 UTC │
	│ delete  │ -p nospam-909089                                                                                                        │ nospam-909089     │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ 19 Nov 25 21:56 UTC │
	│ start   │ -p functional-642533 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio           │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ 19 Nov 25 21:57 UTC │
	│ start   │ -p functional-642533 --alsologtostderr -v=8                                                                             │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 21:57 UTC │ 19 Nov 25 21:58 UTC │
	│ cache   │ functional-642533 cache add registry.k8s.io/pause:3.1                                                                   │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 21:58 UTC │ 19 Nov 25 21:58 UTC │
	│ config  │ functional-642533 config get cpus                                                                                       │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │                     │
	│ ssh     │ functional-642533 ssh cat /etc/hostname                                                                                 │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │ 19 Nov 25 22:03 UTC │
	│ tunnel  │ functional-642533 tunnel --alsologtostderr                                                                              │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │                     │
	│ tunnel  │ functional-642533 tunnel --alsologtostderr                                                                              │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │                     │
	│ tunnel  │ functional-642533 tunnel --alsologtostderr                                                                              │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │                     │
	│ addons  │ functional-642533 addons list                                                                                           │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │ 19 Nov 25 22:03 UTC │
	│ addons  │ functional-642533 addons list -o json                                                                                   │ functional-642533 │ jenkins │ v1.37.0 │ 19 Nov 25 22:03 UTC │ 19 Nov 25 22:03 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:58:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:58:16.746959  882202 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:58:16.747106  882202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:16.747111  882202 out.go:374] Setting ErrFile to fd 2...
	I1119 21:58:16.747114  882202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:58:16.747361  882202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:58:16.747702  882202 out.go:368] Setting JSON to false
	I1119 21:58:16.748586  882202 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13226,"bootTime":1763576271,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 21:58:16.748641  882202 start.go:143] virtualization:  
	I1119 21:58:16.752182  882202 out.go:179] * [functional-642533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:58:16.756035  882202 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:58:16.756110  882202 notify.go:221] Checking for updates...
	I1119 21:58:16.761903  882202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:58:16.764712  882202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:58:16.767622  882202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 21:58:16.770446  882202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 21:58:16.773231  882202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:58:16.776659  882202 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:58:16.776751  882202 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:58:16.804685  882202 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:58:16.804790  882202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:58:16.871994  882202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-19 21:58:16.862697776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:58:16.872115  882202 docker.go:319] overlay module found
	I1119 21:58:16.875375  882202 out.go:179] * Using the docker driver based on existing profile
	I1119 21:58:16.878289  882202 start.go:309] selected driver: docker
	I1119 21:58:16.878322  882202 start.go:930] validating driver "docker" against &{Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:58:16.878405  882202 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:58:16.878520  882202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:58:16.937178  882202 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-19 21:58:16.927012004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:58:16.937565  882202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:58:16.937590  882202 cni.go:84] Creating CNI manager for ""
	I1119 21:58:16.937644  882202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:58:16.937682  882202 start.go:353] cluster config:
	{Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:58:16.942811  882202 out.go:179] * Starting "functional-642533" primary control-plane node in "functional-642533" cluster
	I1119 21:58:16.945567  882202 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:58:16.948437  882202 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:58:16.951248  882202 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:58:16.951290  882202 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 21:58:16.951297  882202 cache.go:65] Caching tarball of preloaded images
	I1119 21:58:16.951323  882202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:58:16.951386  882202 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 21:58:16.951395  882202 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:58:16.951505  882202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/config.json ...
	I1119 21:58:16.974557  882202 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 21:58:16.974569  882202 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 21:58:16.974588  882202 cache.go:243] Successfully downloaded all kic artifacts
	I1119 21:58:16.974611  882202 start.go:360] acquireMachinesLock for functional-642533: {Name:mk16159ab101ca740661ab9c63214a28e2aa4f27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:58:16.974676  882202 start.go:364] duration metric: took 49.338µs to acquireMachinesLock for "functional-642533"
	I1119 21:58:16.974703  882202 start.go:96] Skipping create...Using existing machine configuration
	I1119 21:58:16.974708  882202 fix.go:54] fixHost starting: 
	I1119 21:58:16.975000  882202 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
	I1119 21:58:16.992540  882202 fix.go:112] recreateIfNeeded on functional-642533: state=Running err=<nil>
	W1119 21:58:16.992578  882202 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 21:58:16.995784  882202 out.go:252] * Updating the running docker "functional-642533" container ...
	I1119 21:58:16.995810  882202 machine.go:94] provisionDockerMachine start ...
	I1119 21:58:16.995896  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:17.014849  882202 main.go:143] libmachine: Using SSH client type: native
	I1119 21:58:17.015209  882202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33571 <nil> <nil>}
	I1119 21:58:17.015222  882202 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:58:17.158574  882202 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-642533
	
	I1119 21:58:17.158589  882202 ubuntu.go:182] provisioning hostname "functional-642533"
	I1119 21:58:17.158655  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:17.176763  882202 main.go:143] libmachine: Using SSH client type: native
	I1119 21:58:17.177102  882202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33571 <nil> <nil>}
	I1119 21:58:17.177112  882202 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-642533 && echo "functional-642533" | sudo tee /etc/hostname
	I1119 21:58:17.328672  882202 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-642533
	
	I1119 21:58:17.328741  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:17.348473  882202 main.go:143] libmachine: Using SSH client type: native
	I1119 21:58:17.348779  882202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33571 <nil> <nil>}
	I1119 21:58:17.348794  882202 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-642533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-642533/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-642533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:58:17.491346  882202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:58:17.491362  882202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 21:58:17.491383  882202 ubuntu.go:190] setting up certificates
	I1119 21:58:17.491392  882202 provision.go:84] configureAuth start
	I1119 21:58:17.491451  882202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-642533
	I1119 21:58:17.509344  882202 provision.go:143] copyHostCerts
	I1119 21:58:17.509401  882202 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 21:58:17.509422  882202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 21:58:17.509523  882202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 21:58:17.509665  882202 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 21:58:17.509669  882202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 21:58:17.509694  882202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 21:58:17.509747  882202 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 21:58:17.509750  882202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 21:58:17.509772  882202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 21:58:17.509814  882202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.functional-642533 san=[127.0.0.1 192.168.49.2 functional-642533 localhost minikube]
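	The server certificate generated above is issued for the SANs listed in that log entry (127.0.0.1, 192.168.49.2, functional-642533, localhost, minikube). An illustrative way to confirm that against the file the following entries copy to the node, using the path from the scp line below:

	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'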
	I1119 21:58:18.323974  882202 provision.go:177] copyRemoteCerts
	I1119 21:58:18.324029  882202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:58:18.324088  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:18.353020  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 21:58:18.455186  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:58:18.474464  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 21:58:18.493337  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 21:58:18.512658  882202 provision.go:87] duration metric: took 1.021253685s to configureAuth
	I1119 21:58:18.512675  882202 ubuntu.go:206] setting minikube options for container-runtime
	I1119 21:58:18.512868  882202 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:58:18.512972  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:18.534742  882202 main.go:143] libmachine: Using SSH client type: native
	I1119 21:58:18.535082  882202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33571 <nil> <nil>}
	I1119 21:58:18.535098  882202 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:58:23.940504  882202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:58:23.940516  882202 machine.go:97] duration metric: took 6.944700108s to provisionDockerMachine
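	The sysconfig drop-in written just above hands '--insecure-registry 10.96.0.0/12' to cri-o via CRIO_MINIKUBE_OPTIONS and restarts the service. A hedged spot-check that the flag reached the running daemon, assuming the kicbase crio unit actually sources /etc/sysconfig/crio.minikube (docker exec is just one convenient way into the node container):

	    docker exec functional-642533 sh -c 'ps -o args= -C crio | grep insecure-registry'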
	I1119 21:58:23.940526  882202 start.go:293] postStartSetup for "functional-642533" (driver="docker")
	I1119 21:58:23.940535  882202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:58:23.940615  882202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:58:23.940652  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:23.958627  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 21:58:24.075056  882202 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:58:24.078581  882202 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 21:58:24.078599  882202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 21:58:24.078609  882202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 21:58:24.078668  882202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 21:58:24.078742  882202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 21:58:24.078817  882202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/test/nested/copy/862175/hosts -> hosts in /etc/test/nested/copy/862175
	I1119 21:58:24.078861  882202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/862175
	I1119 21:58:24.086756  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 21:58:24.105033  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/test/nested/copy/862175/hosts --> /etc/test/nested/copy/862175/hosts (40 bytes)
	I1119 21:58:24.122418  882202 start.go:296] duration metric: took 181.877433ms for postStartSetup
	I1119 21:58:24.122504  882202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:58:24.122554  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:24.139334  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 21:58:24.236808  882202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 21:58:24.242121  882202 fix.go:56] duration metric: took 7.267405189s for fixHost
	I1119 21:58:24.242136  882202 start.go:83] releasing machines lock for "functional-642533", held for 7.267452204s
	I1119 21:58:24.242213  882202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-642533
	I1119 21:58:24.259162  882202 ssh_runner.go:195] Run: cat /version.json
	I1119 21:58:24.259204  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:24.259223  882202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:58:24.259295  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 21:58:24.277235  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 21:58:24.278600  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 21:58:24.463289  882202 ssh_runner.go:195] Run: systemctl --version
	I1119 21:58:24.470283  882202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:58:24.509092  882202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:58:24.513641  882202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:58:24.513704  882202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:58:24.522260  882202 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 21:58:24.522275  882202 start.go:496] detecting cgroup driver to use...
	I1119 21:58:24.522304  882202 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 21:58:24.522347  882202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:58:24.538255  882202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:58:24.552807  882202 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:58:24.552872  882202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:58:24.569852  882202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:58:24.583601  882202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:58:24.729401  882202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:58:24.861265  882202 docker.go:234] disabling docker service ...
	I1119 21:58:24.861353  882202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:58:24.876709  882202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:58:24.889977  882202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:58:25.032246  882202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:58:25.174754  882202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:58:25.187846  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:58:25.201414  882202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:58:25.201491  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.210053  882202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:58:25.210124  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.218985  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.229002  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.237529  882202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:58:25.245118  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.253581  882202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.261749  882202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:58:25.271165  882202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:58:25.278667  882202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:58:25.286163  882202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:58:25.432245  882202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:58:31.122403  882202 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.690132891s)
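	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image set to registry.k8s.io/pause:3.10.1, cgroup_manager set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 appended to default_sysctls; the roughly 5.7s systemctl restart then picks the drop-in up. An illustrative spot-check of the resulting file:

	    docker exec functional-642533 sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf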
	I1119 21:58:31.122422  882202 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:58:31.122491  882202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:58:31.126977  882202 start.go:564] Will wait 60s for crictl version
	I1119 21:58:31.127040  882202 ssh_runner.go:195] Run: which crictl
	I1119 21:58:31.130770  882202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 21:58:31.158703  882202 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
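	The bare crictl call above only works because the /etc/crictl.yaml written a few entries earlier points it at the cri-o socket; the explicit equivalent, shown here as a sketch, passes the endpoint on the command line:

	    docker exec functional-642533 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version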
	I1119 21:58:31.158773  882202 ssh_runner.go:195] Run: crio --version
	I1119 21:58:31.188311  882202 ssh_runner.go:195] Run: crio --version
	I1119 21:58:31.220094  882202 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 21:58:31.223123  882202 cli_runner.go:164] Run: docker network inspect functional-642533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:58:31.239092  882202 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 21:58:31.246195  882202 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1119 21:58:31.249022  882202 kubeadm.go:884] updating cluster {Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:58:31.249144  882202 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:58:31.249214  882202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:58:31.287033  882202 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:58:31.287043  882202 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:58:31.287093  882202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:58:31.311815  882202 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:58:31.311826  882202 cache_images.go:86] Images are preloaded, skipping loading
	I1119 21:58:31.311832  882202 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1119 21:58:31.311925  882202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-642533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 21:58:31.312005  882202 ssh_runner.go:195] Run: crio config
	I1119 21:58:31.387473  882202 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1119 21:58:31.387493  882202 cni.go:84] Creating CNI manager for ""
	I1119 21:58:31.387501  882202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:58:31.387515  882202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:58:31.387542  882202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-642533 NodeName:functional-642533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:58:31.387660  882202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-642533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
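	That kubeadm config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few entries below (the 2064-byte scp); the only non-default apiserver piece is the NamespaceAutoProvision admission plugin carried in from the profile's ExtraOptions. An illustrative check that the override reached the rendered file:

	    docker exec functional-642533 sudo grep -A1 'enable-admission-plugins' /var/tmp/minikube/kubeadm.yaml.new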
	
	I1119 21:58:31.387728  882202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:58:31.395389  882202 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:58:31.395445  882202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:58:31.402701  882202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 21:58:31.414587  882202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:58:31.426800  882202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1119 21:58:31.440099  882202 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 21:58:31.443818  882202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:58:31.586470  882202 ssh_runner.go:195] Run: sudo systemctl start kubelet
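	By this point the kubelet.service unit and the 10-kubeadm.conf drop-in shown in the kubeadm.go:947 entry have been copied to /lib/systemd/system and /etc/systemd/system/kubelet.service.d (the 352- and 367-byte scp lines above) and the service started. A sketch for viewing the effective unit together with its drop-in:

	    docker exec functional-642533 systemctl cat kubelet.service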
	I1119 21:58:31.599223  882202 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533 for IP: 192.168.49.2
	I1119 21:58:31.599233  882202 certs.go:195] generating shared ca certs ...
	I1119 21:58:31.599246  882202 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:58:31.599366  882202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 21:58:31.599408  882202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 21:58:31.599424  882202 certs.go:257] generating profile certs ...
	I1119 21:58:31.599513  882202 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.key
	I1119 21:58:31.599557  882202 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/apiserver.key.01f89b07
	I1119 21:58:31.599595  882202 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/proxy-client.key
	I1119 21:58:31.599699  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 21:58:31.599737  882202 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 21:58:31.599743  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 21:58:31.599770  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:58:31.599791  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:58:31.599811  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 21:58:31.599849  882202 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 21:58:31.600462  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:58:31.618553  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:58:31.636236  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:58:31.652991  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 21:58:31.670100  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 21:58:31.688111  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 21:58:31.704992  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:58:31.722143  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 21:58:31.739291  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:58:31.758222  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 21:58:31.775854  882202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 21:58:31.792936  882202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:58:31.805466  882202 ssh_runner.go:195] Run: openssl version
	I1119 21:58:31.811539  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:58:31.819645  882202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:58:31.823228  882202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:58:31.823282  882202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:58:31.864264  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:58:31.872284  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 21:58:31.880588  882202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 21:58:31.884321  882202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 21:58:31.884386  882202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 21:58:31.925127  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 21:58:31.933045  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 21:58:31.941573  882202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 21:58:31.945248  882202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 21:58:31.945316  882202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 21:58:31.986389  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
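The three "openssl x509 -hash" + "ln -fs" pairs above are the standard OpenSSL trust-store convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a ".0" suffix. A minimal Go sketch of that one step, shelling out to openssl exactly as the runner does; the paths are taken from the log, the error handling is illustrative and not minikube's actual helper:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links a CA cert into /etc/ssl/certs under its OpenSSL subject
    // hash, mirroring the "openssl x509 -hash" + "ln -fs" sequence in the log.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }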
	I1119 21:58:31.994139  882202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:58:31.997784  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 21:58:32.041481  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 21:58:32.082978  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 21:58:32.123927  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 21:58:32.181579  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 21:58:32.252335  882202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 21:58:32.326521  882202 kubeadm.go:401] StartCluster: {Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:58:32.326606  882202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:58:32.326670  882202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:58:32.424498  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 21:58:32.424510  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 21:58:32.424513  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 21:58:32.424516  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 21:58:32.424518  882202 cri.go:89] found id: "b0b5a6f74bad15e500f01a3339e488106cd0e421c9ff4d35be1c1a8b0891f957"
	I1119 21:58:32.424521  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 21:58:32.424523  882202 cri.go:89] found id: "92531d05270c8b22e006e8bd522a1d4e67d63205e03026e56314e4f39c13b942"
	I1119 21:58:32.424526  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 21:58:32.424528  882202 cri.go:89] found id: "cc0180407a6cc94dc0a0d9fd9e9d442d306b6610475cfc3659b6618df3cc8ddf"
	I1119 21:58:32.424534  882202 cri.go:89] found id: "2300ef70daf03699cb01a086c92cfc43c263c2bd799666b44a162a9c275de8d7"
	I1119 21:58:32.424536  882202 cri.go:89] found id: "10043ca6a8e55e3309b22b8f1273bc55c784b6d4921f2a887488b94d5e1ea74b"
	I1119 21:58:32.424538  882202 cri.go:89] found id: "f33f05523a53f8eb85ebacd192d0c535251597f53d10c1bd12f2aabf3a21ea16"
	I1119 21:58:32.424540  882202 cri.go:89] found id: "e62baa1de532696e74ae2dc3475b0c24acfada557d36f8e35b7b03201ad6a464"
	I1119 21:58:32.424543  882202 cri.go:89] found id: "ae6432bb5bd6839a3966198a79052ed2cc18beef6a6f0573bce8d60be2363f45"
	I1119 21:58:32.424545  882202 cri.go:89] found id: "5b570c9c8116c80c94fcab72213c4f7433902019fb2db0a3385cd894bc2097b4"
	I1119 21:58:32.424549  882202 cri.go:89] found id: "bb6431fd0e784a85c134c35d22fc3907b721665a24f3faefbf27d9b6cf71cfeb"
	I1119 21:58:32.424551  882202 cri.go:89] found id: "098a06d766ca5048bf4feadeb87b7dbb18143fbc25868fb064dba73740e6f8bb"
	I1119 21:58:32.424557  882202 cri.go:89] found id: "1f3d7b875860f3b3739ec07d3864f35a8f36040e19128ecf716b0be8d8fdb349"
	I1119 21:58:32.424559  882202 cri.go:89] found id: ""
	I1119 21:58:32.424610  882202 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 21:58:32.447931  882202 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:58:32Z" level=error msg="open /run/runc: no such file or directory"
	I1119 21:58:32.448024  882202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:58:32.466198  882202 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 21:58:32.466207  882202 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 21:58:32.466259  882202 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 21:58:32.481209  882202 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:58:32.481774  882202 kubeconfig.go:125] found "functional-642533" server: "https://192.168.49.2:8441"
	I1119 21:58:32.483190  882202 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 21:58:32.501010  882202 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-19 21:56:34.722420196 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-19 21:58:31.431825655 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
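The "config drift" decision above is simply a diff of the kubeadm config used at the previous start against the one rendered for this start; a non-empty diff (diff -u exiting with status 1) triggers the restart path that follows. A rough Go equivalent of that check, assuming the same file locations shown in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted reports whether the rendered kubeadm config differs from the
    // one currently on disk, using diff's exit status: 0 = identical, 1 = drift.
    func configDrifted(current, rendered string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", current, rendered).CombinedOutput()
    	if err == nil {
    		return false, "", nil // no differences
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ
    	}
    	return false, "", err // diff itself failed (missing file, etc.)
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Print(diff)
    	}
    }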
	I1119 21:58:32.501018  882202 kubeadm.go:1161] stopping kube-system containers ...
	I1119 21:58:32.501030  882202 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1119 21:58:32.501102  882202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:58:32.600950  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 21:58:32.600962  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 21:58:32.600965  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 21:58:32.600968  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 21:58:32.600971  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 21:58:32.600974  882202 cri.go:89] found id: "b0b5a6f74bad15e500f01a3339e488106cd0e421c9ff4d35be1c1a8b0891f957"
	I1119 21:58:32.600976  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 21:58:32.600979  882202 cri.go:89] found id: "92531d05270c8b22e006e8bd522a1d4e67d63205e03026e56314e4f39c13b942"
	I1119 21:58:32.600981  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 21:58:32.600986  882202 cri.go:89] found id: "cc0180407a6cc94dc0a0d9fd9e9d442d306b6610475cfc3659b6618df3cc8ddf"
	I1119 21:58:32.600989  882202 cri.go:89] found id: "2300ef70daf03699cb01a086c92cfc43c263c2bd799666b44a162a9c275de8d7"
	I1119 21:58:32.600991  882202 cri.go:89] found id: "10043ca6a8e55e3309b22b8f1273bc55c784b6d4921f2a887488b94d5e1ea74b"
	I1119 21:58:32.600993  882202 cri.go:89] found id: "f33f05523a53f8eb85ebacd192d0c535251597f53d10c1bd12f2aabf3a21ea16"
	I1119 21:58:32.600996  882202 cri.go:89] found id: "e62baa1de532696e74ae2dc3475b0c24acfada557d36f8e35b7b03201ad6a464"
	I1119 21:58:32.600998  882202 cri.go:89] found id: ""
	I1119 21:58:32.601007  882202 cri.go:252] Stopping containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc b0b5a6f74bad15e500f01a3339e488106cd0e421c9ff4d35be1c1a8b0891f957 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2 92531d05270c8b22e006e8bd522a1d4e67d63205e03026e56314e4f39c13b942 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604 cc0180407a6cc94dc0a0d9fd9e9d442d306b6610475cfc3659b6618df3cc8ddf 2300ef70daf03699cb01a086c92cfc43c263c2bd799666b44a162a9c275de8d7 10043ca6a8e55e3309b22b8f1273bc55c784b6d4921f2a887488b94d5e1ea74b f33f05523a53f8eb85ebacd192d0c535251597f53d10c1bd12f2aabf3a21ea16 e62baa1de532696e74ae2dc3475b0c24acfada557d36f8e35b7b03201ad6a464]
	I1119 21:58:32.601065  882202 ssh_runner.go:195] Run: which crictl
	I1119 21:58:32.605102  882202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc b0b5a6f74bad15e500f01a3339e488106cd0e421c9ff4d35be1c1a8b0891f957 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2 92531d05270c8b22e006e8bd522a1d4e67d63205e03026e56314e4f39c13b942 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604 cc0180407a6cc94dc0a0d9fd9e9d442d306b6610475cfc3659b6618df3cc8ddf 2300ef70daf03699cb01a086c92cfc43c263c2bd799666b44a162a9c275de8d7 10043ca6a8e55e3309b22b8f1273bc55c784b6d4921f2a887488b94d5e1ea74b f33f05523a53f8eb85ebacd192d0c535251597f53d10c1bd12f2aabf3a21ea16 e62baa1de532696e74ae2dc3475b0c24acfada557d36f8e35b7b03201ad6a464
	I1119 21:58:57.373533  882202 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc b0b5a6f74bad15e500f01a3339e488106cd0e421c9ff4d35be1c1a8b0891f957 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2 92531d05270c8b22e006e8bd522a1d4e67d63205e03026e56314e4f39c13b942 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604 cc0180407a6cc94dc0a0d9fd9e9d442d306b6610475cfc3659b6618df3cc8ddf 2300ef70daf03699cb01a086c92cfc43c263c2bd799666b44a162a9c275de8d7 10043ca6a8e55e3309b22b8f1273bc55c784b6d4921f2a887488b94d5e1ea74b f33f05523a53f8eb85ebacd192d0c535251597f53d10c1bd12f2aabf3a21ea16 e62baa1de532696e74ae2dc3475b0c24acfada557d36f8e35b7b03201ad6a464: (24.768395139s)
	I1119 21:58:57.373608  882202 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1119 21:58:57.494195  882202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:58:57.502512  882202 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 19 21:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 19 21:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 19 21:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 19 21:56 /etc/kubernetes/scheduler.conf
	
	I1119 21:58:57.502577  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1119 21:58:57.510704  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1119 21:58:57.518456  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:58:57.518511  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:58:57.526328  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1119 21:58:57.533957  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:58:57.534012  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:58:57.541422  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1119 21:58:57.549309  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 21:58:57.549372  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 21:58:57.556911  882202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:58:57.564728  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 21:58:57.616282  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 21:58:59.884062  882202 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.267755004s)
	I1119 21:58:59.884123  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1119 21:59:00.231333  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 21:59:00.372151  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1119 21:59:00.469262  882202 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:59:00.469358  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:00.969540  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:01.469768  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:01.969991  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:02.470321  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:02.969604  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:03.470400  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:03.970105  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:04.469435  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:04.970088  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:05.469456  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:05.970085  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:06.469817  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:06.969612  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:07.469785  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:07.969961  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:08.470153  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:08.970198  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:09.469578  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:09.970206  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:10.469550  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:10.970111  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:11.470133  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:11.969404  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:12.470166  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:12.969482  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:13.470045  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:13.969478  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:14.469778  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:14.970344  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:15.469563  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:15.969504  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:16.469812  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:16.970367  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:17.469485  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:17.969919  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:18.470438  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:18.969820  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:19.469486  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:19.969799  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:20.470158  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:20.969523  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:21.469750  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:21.970249  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:22.469480  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:22.969801  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:23.469475  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:23.969682  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:24.470161  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:24.970146  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:25.469490  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:25.969644  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:26.470040  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:26.969467  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:27.470044  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:27.969612  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:28.470165  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:28.969485  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:29.470241  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:29.969563  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:30.469585  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:30.969437  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:31.470340  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:31.969805  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:32.469503  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:32.969666  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:33.469488  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:33.969923  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:34.469679  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:34.970165  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:35.470174  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:35.969670  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:36.469791  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:36.969506  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:37.469552  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:37.969605  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:38.470064  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:38.969975  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:39.469546  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:39.970029  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:40.469498  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:40.969835  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:41.469474  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:41.970204  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:42.469881  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:42.969616  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:43.470291  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:43.969702  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:44.469807  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:44.969472  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:45.469867  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:45.969466  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:46.470368  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:46.970280  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:47.469528  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:47.969722  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:48.469532  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:48.969809  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:49.470375  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:49.969829  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:50.470339  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:50.970493  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:51.470040  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:51.970246  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:52.469571  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:52.970429  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:53.470144  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:53.969409  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:54.469450  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:54.970373  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:55.469481  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:55.969469  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:56.469687  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:56.970206  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:57.469387  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:57.970064  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:58.469507  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:58.969507  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:59.469496  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:59:59.969506  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
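The block above is the "waiting for apiserver process to appear" phase: roughly every 500 ms the runner executes pgrep -xnf kube-apiserver.*minikube.* over SSH and keeps polling until a match appears or the wait window elapses; here it never matches, so component log collection begins at 22:00:00. A simplified local sketch of that poll loop (the timeout value is illustrative, and sudo is omitted):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process shows up or
    // the deadline passes, mirroring the ~500 ms cadence seen in the log.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // process found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(90 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }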
	I1119 22:00:00.470131  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:00.470288  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:00.657120  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:00.657134  882202 cri.go:89] found id: ""
	I1119 22:00:00.657141  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:00.657220  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:00.693837  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:00.693916  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:00.771464  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:00.771478  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:00.771482  882202 cri.go:89] found id: ""
	I1119 22:00:00.771490  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:00.771554  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:00.787911  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:00.796533  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:00.796646  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:00.902322  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:00.902335  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:00.902349  882202 cri.go:89] found id: ""
	I1119 22:00:00.902357  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:00.902431  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:00.924375  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:00.961625  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:00.961719  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:01.058814  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:01.058827  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:01.058830  882202 cri.go:89] found id: ""
	I1119 22:00:01.058837  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:01.058954  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.087825  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.107171  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:01.107266  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:01.201987  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:01.202002  882202 cri.go:89] found id: ""
	I1119 22:00:01.202009  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:01.202081  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.207364  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:01.208129  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:01.318674  882202 cri.go:89] found id: "0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:01.318688  882202 cri.go:89] found id: ""
	I1119 22:00:01.318695  882202 logs.go:282] 1 containers: [0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6]
	I1119 22:00:01.318766  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.324408  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:01.324501  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:01.357815  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:01.357827  882202 cri.go:89] found id: ""
	I1119 22:00:01.357835  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:01.357902  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.365231  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:01.365299  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:01.417673  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:01.417686  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:01.417690  882202 cri.go:89] found id: ""
	I1119 22:00:01.417697  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:01.417780  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.422531  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:01.426983  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:01.427000  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:01.482232  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:01.482250  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:01.526383  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:01.526404  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:01.548062  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:01.548081  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:01.743288  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:01.743311  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:01.826771  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:01.826799  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:01.861870  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:01.861889  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:01.908846  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:01.908865  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:02.003086  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:02.003107  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:02.103308  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:02.103326  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:02.103338  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:02.150423  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:02.150442  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:02.186355  882202 logs.go:123] Gathering logs for kube-controller-manager [0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6] ...
	I1119 22:00:02.186371  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:02.225768  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:02.225785  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:02.333527  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:02.333548  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:02.368636  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:02.368653  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:02.413582  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:02.413599  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:02.453931  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:02.453949  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:05.038284  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:05.050190  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:05.050254  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:05.078631  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:05.078643  882202 cri.go:89] found id: ""
	I1119 22:00:05.078650  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:05.078706  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.082478  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:05.082542  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:05.115730  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:05.115742  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:05.115745  882202 cri.go:89] found id: ""
	I1119 22:00:05.115751  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:05.115808  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.119820  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.123680  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:05.123743  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:05.151951  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:05.151963  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:05.151966  882202 cri.go:89] found id: ""
	I1119 22:00:05.151972  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:05.152036  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.155965  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.159479  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:05.159541  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:05.186909  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:05.186925  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:05.186928  882202 cri.go:89] found id: ""
	I1119 22:00:05.186934  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:05.186989  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.190833  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.194377  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:05.194438  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:05.223562  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:05.223573  882202 cri.go:89] found id: ""
	I1119 22:00:05.223580  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:05.223635  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.227547  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:05.227622  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:05.254284  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:05.254296  882202 cri.go:89] found id: "0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:05.254299  882202 cri.go:89] found id: ""
	I1119 22:00:05.254306  882202 logs.go:282] 2 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6]
	I1119 22:00:05.254367  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.258137  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.261428  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:05.261536  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:05.288000  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:05.288012  882202 cri.go:89] found id: ""
	I1119 22:00:05.288023  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:05.288082  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.291778  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:05.291855  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:05.318558  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:05.318571  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:05.318575  882202 cri.go:89] found id: ""
	I1119 22:00:05.318582  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:05.318642  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.322579  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:05.326429  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:05.326447  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:05.423726  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:05.423748  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:05.494549  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:05.494558  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:05.494569  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:05.521019  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:05.521036  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:05.548319  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:05.548342  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:05.583270  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:05.583288  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:05.598738  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:05.598755  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:05.625928  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:05.625945  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:05.742013  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:05.742034  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:05.785817  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:05.785835  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:05.830637  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:05.830657  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:05.886959  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:05.886980  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:05.919591  882202 logs.go:123] Gathering logs for kube-controller-manager [0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6] ...
	I1119 22:00:05.919607  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:05.946413  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:05.946430  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:05.981757  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:05.981775  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:06.012158  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:06.012177  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:06.040948  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:06.040966  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:06.068335  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:06.068352  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
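Each "Gathering logs for ..." step above pairs a crictl ps -a --quiet --name=<component> lookup with one crictl logs --tail 400 <id> call per container id found, which is how the per-component sections of this dump are produced. A hedged sketch of that pattern (assumes crictl on PATH and root via sudo; output framing is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponentLogs lists all containers (running or exited) whose name
    // matches the component and prints the last 400 log lines of each.
    func tailComponentLogs(component string) error {
    	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return err
    	}
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("logs for %s: %w", id, err)
    		}
    		fmt.Printf("== %s (%s) ==\n%s", component, id, out)
    	}
    	return nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		if err := tailComponentLogs(c); err != nil {
    			fmt.Println(err)
    		}
    	}
    }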
	I1119 22:00:08.648013  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:08.659926  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:08.659984  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:08.691420  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:08.691431  882202 cri.go:89] found id: ""
	I1119 22:00:08.691437  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:08.691503  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.695804  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:08.695864  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:08.721954  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:08.721965  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:08.721969  882202 cri.go:89] found id: ""
	I1119 22:00:08.721975  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:08.722038  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.725740  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.729231  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:08.729294  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:08.758673  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:08.758685  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:08.758688  882202 cri.go:89] found id: ""
	I1119 22:00:08.758694  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:08.758749  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.762432  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.765900  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:08.765959  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:08.793429  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:08.793441  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:08.793444  882202 cri.go:89] found id: ""
	I1119 22:00:08.793459  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:08.793513  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.797793  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.801424  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:08.801541  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:08.827372  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:08.827383  882202 cri.go:89] found id: ""
	I1119 22:00:08.827389  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:08.827448  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.831215  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:08.831275  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:08.858063  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:08.858076  882202 cri.go:89] found id: "0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:08.858079  882202 cri.go:89] found id: ""
	I1119 22:00:08.858085  882202 logs.go:282] 2 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6]
	I1119 22:00:08.858152  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.862039  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.865703  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:08.865765  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:08.892544  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:08.892556  882202 cri.go:89] found id: ""
	I1119 22:00:08.892562  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:08.892617  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.896479  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:08.896539  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:08.922798  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:08.922810  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:08.922813  882202 cri.go:89] found id: ""
	I1119 22:00:08.922820  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:08.922950  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.926794  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:08.930552  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:08.930567  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:09.002061  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:09.002072  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:09.002086  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:09.033418  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:09.033435  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:09.064928  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:09.064951  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:09.095800  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:09.095820  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:09.141870  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:09.141889  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:09.174284  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:09.174301  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:09.225493  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:09.225514  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:09.258124  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:09.258140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:09.285440  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:09.285461  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:09.370085  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:09.370112  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:09.473346  882202 logs.go:123] Gathering logs for kube-controller-manager [0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6] ...
	I1119 22:00:09.473368  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:09.501732  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:09.501750  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:09.528905  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:09.528922  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:09.571574  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:09.571593  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:09.587322  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:09.587340  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:09.700924  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:09.700944  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:09.745062  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:09.745078  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:12.274852  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:12.286217  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:12.286276  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:12.312846  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:12.312856  882202 cri.go:89] found id: ""
	I1119 22:00:12.312863  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:12.312958  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.316549  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:12.316607  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:12.345830  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:12.345842  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:12.345846  882202 cri.go:89] found id: ""
	I1119 22:00:12.345853  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:12.345911  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.349698  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.353537  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:12.353597  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:12.380743  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:12.380755  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:12.380759  882202 cri.go:89] found id: ""
	I1119 22:00:12.380765  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:12.380822  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.384679  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.388460  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:12.388528  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:12.415827  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:12.415838  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:12.415841  882202 cri.go:89] found id: ""
	I1119 22:00:12.415847  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:12.415907  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.419875  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.423538  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:12.423605  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:12.451298  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:12.451328  882202 cri.go:89] found id: ""
	I1119 22:00:12.451335  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:12.451394  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.455395  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:12.455465  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:12.482098  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:12.482115  882202 cri.go:89] found id: "0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:12.482119  882202 cri.go:89] found id: ""
	I1119 22:00:12.482125  882202 logs.go:282] 2 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6]
	I1119 22:00:12.482194  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.486055  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.489730  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:12.489794  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:12.518367  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:12.518378  882202 cri.go:89] found id: ""
	I1119 22:00:12.518384  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:12.518438  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.522236  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:12.522298  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:12.549265  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:12.549277  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:12.549280  882202 cri.go:89] found id: ""
	I1119 22:00:12.549287  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:12.549355  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.553136  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:12.556822  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:12.556839  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:12.604055  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:12.604075  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:12.633183  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:12.633199  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:12.684756  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:12.684775  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:12.712995  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:12.713011  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:12.728544  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:12.728561  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:12.755796  882202 logs.go:123] Gathering logs for kube-controller-manager [0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6] ...
	I1119 22:00:12.755812  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0cdd5b14335e1ac7a42632cc877e899f0ffc45132402170ba03dd323815305a6"
	I1119 22:00:12.781806  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:12.781823  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:12.811789  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:12.811805  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:12.837369  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:12.837386  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:12.866212  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:12.866231  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:12.892870  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:12.892887  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:12.926620  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:12.926639  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:13.007917  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:13.007940  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:13.042604  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:13.042624  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:13.146988  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:13.147009  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:13.217341  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:13.217350  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:13.217363  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:13.330542  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:13.330565  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:15.869509  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:15.880925  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:15.880987  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:15.908095  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:15.908107  882202 cri.go:89] found id: ""
	I1119 22:00:15.908113  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:15.908169  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:15.911975  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:15.912034  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:15.937976  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:15.937988  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:15.938001  882202 cri.go:89] found id: ""
	I1119 22:00:15.938009  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:15.938066  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:15.941934  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:15.945831  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:15.945910  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:15.972689  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:15.972701  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:15.972704  882202 cri.go:89] found id: ""
	I1119 22:00:15.972711  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:15.972769  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:15.976613  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:15.980430  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:15.980507  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:16.012627  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:16.012639  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:16.012646  882202 cri.go:89] found id: ""
	I1119 22:00:16.012653  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:16.012726  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.017747  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.022036  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:16.022103  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:16.051212  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:16.051224  882202 cri.go:89] found id: ""
	I1119 22:00:16.051232  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:16.051299  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.055282  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:16.055361  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:16.083475  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:16.083499  882202 cri.go:89] found id: ""
	I1119 22:00:16.083507  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:16.083571  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.087776  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:16.087843  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:16.114756  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:16.114778  882202 cri.go:89] found id: ""
	I1119 22:00:16.114786  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:16.114855  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.119481  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:16.119540  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:16.146618  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:16.146629  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:16.146633  882202 cri.go:89] found id: ""
	I1119 22:00:16.146639  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:16.146702  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.151984  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:16.155790  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:16.155807  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:16.183114  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:16.183130  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:16.213123  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:16.213140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:16.310817  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:16.310839  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:16.381421  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:16.381430  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:16.381441  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:16.424442  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:16.424461  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:16.541077  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:16.541099  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:16.578375  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:16.578394  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:16.658711  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:16.658732  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:16.675636  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:16.675656  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:16.701212  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:16.701230  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:16.727255  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:16.727272  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:16.792591  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:16.792610  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:16.818564  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:16.818580  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:16.844794  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:16.844811  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:16.879149  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:16.879166  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:16.905760  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:16.905777  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:19.436797  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:19.450690  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:19.450749  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:19.478965  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:19.478984  882202 cri.go:89] found id: ""
	I1119 22:00:19.478991  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:19.479052  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.482824  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:19.482921  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:19.509661  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:19.509673  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:19.509676  882202 cri.go:89] found id: ""
	I1119 22:00:19.509682  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:19.509737  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.513397  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.516747  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:19.516817  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:19.546850  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:19.546862  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:19.546880  882202 cri.go:89] found id: ""
	I1119 22:00:19.546886  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:19.546942  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.550889  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.554398  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:19.554460  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:19.588732  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:19.588743  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:19.588746  882202 cri.go:89] found id: ""
	I1119 22:00:19.588752  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:19.588807  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.592745  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.596755  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:19.596819  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:19.624141  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:19.624153  882202 cri.go:89] found id: ""
	I1119 22:00:19.624160  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:19.624216  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.627840  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:19.627908  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:19.660604  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:19.660616  882202 cri.go:89] found id: ""
	I1119 22:00:19.660623  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:19.660680  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.664586  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:19.664658  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:19.694477  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:19.694488  882202 cri.go:89] found id: ""
	I1119 22:00:19.694495  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:19.694550  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.698526  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:19.698598  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:19.725062  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:19.725073  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:19.725076  882202 cri.go:89] found id: ""
	I1119 22:00:19.725082  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:19.725157  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.729013  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:19.732826  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:19.732841  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:19.859257  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:19.859279  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:19.903922  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:19.903942  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:19.931167  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:19.931184  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:19.957434  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:19.957450  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:20.058459  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:20.058490  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:20.134369  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:20.134380  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:20.134394  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:20.160778  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:20.160799  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:20.213051  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:20.213072  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:20.243906  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:20.243922  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:20.273705  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:20.273729  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:20.314065  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:20.314084  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:20.348580  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:20.348597  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:20.380918  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:20.380935  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:20.407280  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:20.407300  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:20.485390  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:20.485412  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:20.521860  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:20.521876  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:23.038718  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:23.050191  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:23.050249  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:23.077619  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:23.077631  882202 cri.go:89] found id: ""
	I1119 22:00:23.077638  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:23.077695  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.081331  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:23.081393  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:23.106694  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:23.106705  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:23.106708  882202 cri.go:89] found id: ""
	I1119 22:00:23.106715  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:23.106779  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.110646  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.114128  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:23.114186  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:23.140356  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:23.140368  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:23.140371  882202 cri.go:89] found id: ""
	I1119 22:00:23.140377  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:23.140436  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.145852  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.149529  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:23.149593  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:23.176565  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:23.176576  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:23.176579  882202 cri.go:89] found id: ""
	I1119 22:00:23.176585  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:23.176638  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.180363  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.183917  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:23.183985  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:23.210257  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:23.210268  882202 cri.go:89] found id: ""
	I1119 22:00:23.210275  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:23.210330  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.213926  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:23.213984  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:23.243022  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:23.243034  882202 cri.go:89] found id: ""
	I1119 22:00:23.243041  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:23.243094  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.246965  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:23.247026  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:23.281099  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:23.281111  882202 cri.go:89] found id: ""
	I1119 22:00:23.281117  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:23.281173  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.284995  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:23.285068  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:23.311661  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:23.311672  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:23.311680  882202 cri.go:89] found id: ""
	I1119 22:00:23.311686  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:23.311745  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.315412  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:23.318823  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:23.318839  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:23.416170  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:23.416192  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:23.496424  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:23.496436  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:23.496446  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:23.543215  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:23.543233  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:23.572392  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:23.572407  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:23.602654  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:23.602672  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:23.645948  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:23.645967  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:23.675890  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:23.675906  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:23.707544  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:23.707562  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:23.733350  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:23.733367  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:23.815353  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:23.815372  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:23.884296  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:23.884319  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:23.899582  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:23.899599  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:24.014092  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:24.014114  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:24.043187  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:24.043204  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:24.074487  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:24.074504  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:24.100625  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:24.100640  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:26.630579  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:26.642115  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:26.642175  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:26.668980  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:26.668991  882202 cri.go:89] found id: ""
	I1119 22:00:26.668998  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:26.669101  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.673218  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:26.673281  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:26.700836  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:26.700847  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:26.700850  882202 cri.go:89] found id: ""
	I1119 22:00:26.700856  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:26.700913  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.704863  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.708395  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:26.708479  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:26.735835  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:26.735848  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:26.735851  882202 cri.go:89] found id: ""
	I1119 22:00:26.735857  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:26.735913  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.739731  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.743321  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:26.743386  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:26.770982  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:26.770995  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:26.770999  882202 cri.go:89] found id: ""
	I1119 22:00:26.771005  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:26.771065  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.775036  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.778387  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:26.778450  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:26.805147  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:26.805159  882202 cri.go:89] found id: ""
	I1119 22:00:26.805165  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:26.805220  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.808993  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:26.809065  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:26.838994  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:26.839006  882202 cri.go:89] found id: ""
	I1119 22:00:26.839012  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:26.839069  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.843172  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:26.843234  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:26.874820  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:26.874845  882202 cri.go:89] found id: ""
	I1119 22:00:26.874853  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:26.874932  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.878792  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:26.878854  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:26.905664  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:26.905676  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:26.905680  882202 cri.go:89] found id: ""
	I1119 22:00:26.905686  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:26.905739  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.909604  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:26.913128  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:26.913143  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:26.942926  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:26.942942  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:26.969374  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:26.969390  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:26.985509  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:26.985526  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:27.013925  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:27.013942  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:27.044074  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:27.044090  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:27.121019  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:27.121117  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:27.194055  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:27.194068  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:27.194080  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:27.309094  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:27.309114  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:27.344304  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:27.344327  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:27.373744  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:27.373760  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:27.442507  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:27.442527  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:27.473717  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:27.473733  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:27.504654  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:27.504675  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:27.607257  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:27.607277  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:27.653868  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:27.653888  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:27.679953  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:27.679973  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
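The pass above is one full log-gathering cycle: minikube resolves crictl, enumerates the container ID for each control-plane component, then tails each container's log. For anyone replaying this failure on the node itself, a minimal sketch of the discovery step follows; the component names and the crictl flags are copied from the commands logged above, while opening a shell via `minikube ssh` first is an assumption rather than part of the test run.

# Sketch: enumerate control-plane container IDs the same way the cycle above does.
# Assumes a shell on the node (e.g. via `minikube ssh`) with crictl on the PATH.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  echo "$name: ${ids:-<none>}"
done

Each non-empty line of output is a container ID that the cycle then feeds into crictl logs, as the next pass shows.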
	I1119 22:00:30.213046  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:30.224312  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:30.224371  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:30.255026  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:30.255038  882202 cri.go:89] found id: ""
	I1119 22:00:30.255044  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:30.255098  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.258780  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:30.258843  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:30.286319  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:30.286330  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:30.286334  882202 cri.go:89] found id: ""
	I1119 22:00:30.286340  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:30.286395  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.290698  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.294734  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:30.294801  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:30.320625  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:30.320637  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:30.320640  882202 cri.go:89] found id: ""
	I1119 22:00:30.320647  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:30.320701  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.324554  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.328100  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:30.328164  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:30.354372  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:30.354383  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:30.354386  882202 cri.go:89] found id: ""
	I1119 22:00:30.354393  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:30.354448  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.358067  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.361508  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:30.361577  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:30.390200  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:30.390211  882202 cri.go:89] found id: ""
	I1119 22:00:30.390218  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:30.390276  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.393898  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:30.393964  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:30.422313  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:30.422324  882202 cri.go:89] found id: ""
	I1119 22:00:30.422331  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:30.422384  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.425779  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:30.425834  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:30.460866  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:30.460877  882202 cri.go:89] found id: ""
	I1119 22:00:30.460883  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:30.460935  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.464436  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:30.464493  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:30.490248  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:30.490260  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:30.490263  882202 cri.go:89] found id: ""
	I1119 22:00:30.490269  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:30.490322  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.493904  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:30.497228  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:30.497240  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:30.573074  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:30.573093  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:30.610723  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:30.610739  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:30.667796  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:30.667816  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:30.702219  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:30.702236  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:30.747655  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:30.747673  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:30.775043  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:30.775061  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:30.802812  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:30.802828  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:30.829147  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:30.829164  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:30.858457  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:30.858473  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:30.885167  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:30.885184  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:30.986722  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:30.986743  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:31.108512  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:31.108534  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:31.147299  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:31.147317  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:31.180098  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:31.180117  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:31.195526  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:31.195542  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:31.267418  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:31.267427  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:31.267438  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
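Every "Gathering logs for ..." line above pairs one of those container IDs with a crictl logs --tail 400 call, alongside journalctl for the kubelet and CRI-O units and a filtered dmesg. Below is a sketch of the same collection step using only commands that appear verbatim in the log; the $name and $id variables and the output directory are hypothetical additions for illustration.

# Sketch: replay the per-source collection from the cycle above.
# $name and $id would come from the discovery loop; OUT is a made-up destination.
OUT=/tmp/minikube-diag; mkdir -p "$OUT"
sudo /usr/local/bin/crictl logs --tail 400 "$id" > "$OUT/$name.log" 2>&1
sudo journalctl -u kubelet -n 400 > "$OUT/kubelet.log"
sudo journalctl -u crio -n 400 > "$OUT/crio.log"
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > "$OUT/dmesg.log"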
	I1119 22:00:33.796871  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:33.810987  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:33.811057  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:33.837097  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:33.837108  882202 cri.go:89] found id: ""
	I1119 22:00:33.837114  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:33.837168  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.840813  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:33.840886  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:33.868108  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:33.868119  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:33.868123  882202 cri.go:89] found id: ""
	I1119 22:00:33.868129  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:33.868190  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.871919  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.875713  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:33.875774  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:33.904630  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:33.904641  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:33.904644  882202 cri.go:89] found id: ""
	I1119 22:00:33.904651  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:33.904734  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.908594  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.912322  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:33.912385  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:33.939485  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:33.939496  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:33.939516  882202 cri.go:89] found id: ""
	I1119 22:00:33.939525  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:33.939597  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.943973  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.947787  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:33.947851  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:33.974705  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:33.974716  882202 cri.go:89] found id: ""
	I1119 22:00:33.974722  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:33.974793  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:33.978734  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:33.978793  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:34.014369  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:34.014380  882202 cri.go:89] found id: ""
	I1119 22:00:34.014388  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:34.014488  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:34.018531  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:34.018594  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:34.046684  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:34.046696  882202 cri.go:89] found id: ""
	I1119 22:00:34.046702  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:34.046756  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:34.051413  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:34.051472  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:34.081797  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:34.081808  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:34.081811  882202 cri.go:89] found id: ""
	I1119 22:00:34.081817  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:34.081888  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:34.085741  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:34.089558  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:34.089573  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:34.118940  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:34.118966  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:34.157266  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:34.157285  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:34.273909  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:34.273938  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:34.318057  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:34.318078  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:34.377107  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:34.377128  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:34.404913  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:34.404932  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:34.431529  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:34.431546  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:34.518247  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:34.518266  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:34.555012  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:34.555031  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:34.594028  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:34.594045  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:34.621241  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:34.621258  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:34.650167  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:34.650184  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:34.676573  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:34.676589  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:34.773831  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:34.773852  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:34.844777  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:34.844787  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:34.844798  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:34.872724  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:34.872740  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
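Each cycle also runs kubectl describe nodes against the kubeconfig on the node, and every attempt fails the same way: "The connection to the server localhost:8441 was refused". In other words, crictl still finds the kube-apiserver container, but nothing answers on the port the kubeconfig points at. A hedged way to confirm that directly on the node is sketched below; the port, the kubectl path and the kubeconfig path are taken from the log, while the ss and curl probes are assumed diagnostics, not something the test executed.

# Sketch: check whether anything is actually serving on the apiserver port.
# localhost:8441 and both file paths come from the log; the probes are assumptions.
sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
curl -sk https://localhost:8441/healthz || echo "apiserver not answering"
sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig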
	I1119 22:00:37.388694  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:37.400123  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:37.400185  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:37.430443  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:37.430455  882202 cri.go:89] found id: ""
	I1119 22:00:37.430461  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:37.430518  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.434091  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:37.434153  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:37.469834  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:37.469845  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:37.469848  882202 cri.go:89] found id: ""
	I1119 22:00:37.469855  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:37.469912  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.473681  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.477063  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:37.477126  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:37.504281  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:37.504292  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:37.504295  882202 cri.go:89] found id: ""
	I1119 22:00:37.504303  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:37.504358  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.508222  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.511908  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:37.511971  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:37.539694  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:37.539706  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:37.539709  882202 cri.go:89] found id: ""
	I1119 22:00:37.539716  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:37.539777  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.543786  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.547420  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:37.547478  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:37.578831  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:37.578842  882202 cri.go:89] found id: ""
	I1119 22:00:37.578849  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:37.578929  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.582772  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:37.582836  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:37.609054  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:37.609066  882202 cri.go:89] found id: ""
	I1119 22:00:37.609072  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:37.609126  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.612843  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:37.612908  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:37.643954  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:37.643965  882202 cri.go:89] found id: ""
	I1119 22:00:37.643971  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:37.644033  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.647729  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:37.647790  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:37.673440  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:37.673451  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:37.673455  882202 cri.go:89] found id: ""
	I1119 22:00:37.673461  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:37.673524  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.677136  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:37.680637  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:37.680651  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:37.746556  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:37.746566  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:37.746576  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:37.864815  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:37.864835  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:37.899976  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:37.899996  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:37.925371  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:37.925387  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:37.952345  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:37.952360  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:37.979439  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:37.979454  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:38.062446  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:38.062466  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:38.097846  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:38.097864  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:38.208215  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:38.208246  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:38.258626  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:38.258648  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:38.285711  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:38.285728  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:38.316860  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:38.316878  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:38.332199  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:38.332218  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:38.358654  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:38.358672  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:38.413854  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:38.413873  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:38.449550  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:38.449566  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:40.979445  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:40.990497  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:40.990558  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:41.024976  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:41.024988  882202 cri.go:89] found id: ""
	I1119 22:00:41.024995  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:41.025053  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.029085  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:41.029154  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:41.057744  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:41.057756  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:41.057759  882202 cri.go:89] found id: ""
	I1119 22:00:41.057766  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:41.057821  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.061777  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.066006  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:41.066099  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:41.092548  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:41.092560  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:41.092563  882202 cri.go:89] found id: ""
	I1119 22:00:41.092570  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:41.092626  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.096388  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.100093  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:41.100155  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:41.125624  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:41.125636  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:41.125639  882202 cri.go:89] found id: ""
	I1119 22:00:41.125646  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:41.125702  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.129475  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.132914  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:41.132977  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:41.160586  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:41.160598  882202 cri.go:89] found id: ""
	I1119 22:00:41.160605  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:41.160666  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.164696  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:41.164759  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:41.191368  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:41.191379  882202 cri.go:89] found id: ""
	I1119 22:00:41.191385  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:41.191447  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.195264  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:41.195329  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:41.221195  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:41.221206  882202 cri.go:89] found id: ""
	I1119 22:00:41.221212  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:41.221268  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.224981  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:41.225043  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:41.252050  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:41.252062  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:41.252066  882202 cri.go:89] found id: ""
	I1119 22:00:41.252072  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:41.252137  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.256055  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:41.259633  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:41.259648  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:41.274780  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:41.274797  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:41.300679  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:41.300699  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:41.330522  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:41.330538  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:41.357992  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:41.358010  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:41.394331  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:41.394348  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:41.419704  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:41.419721  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:41.473530  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:41.473548  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:41.500173  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:41.500189  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:41.525479  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:41.525513  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:41.626759  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:41.626779  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:41.744802  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:41.744821  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:41.825112  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:41.825131  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:41.862448  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:41.862464  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:41.933730  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:41.933740  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:41.933752  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:41.968977  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:41.968999  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:42.027910  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:42.027935  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
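The cycles are driven by a retry loop: roughly every three to four seconds the runner re-checks the apiserver process with pgrep and, while the API stays unreachable, gathers the same logs again. A minimal sketch of an equivalent wait loop follows; the pgrep pattern is copied from the log, while the 3-second interval, the 40-attempt budget and the /healthz probe are assumptions.

# Sketch: poll until the apiserver answers, mirroring the pgrep checks above.
# Pattern is from the log; interval, attempt count and the curl probe are assumed.
for i in $(seq 1 40); do
  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null || echo "no apiserver process"
  curl -sk https://localhost:8441/healthz >/dev/null && { echo "apiserver up"; break; }
  sleep 3
done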
	I1119 22:00:44.559035  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:44.570100  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:44.570160  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:44.595753  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:44.595765  882202 cri.go:89] found id: ""
	I1119 22:00:44.595773  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:44.595829  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.599548  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:44.599613  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:44.628902  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:44.628913  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:44.628917  882202 cri.go:89] found id: ""
	I1119 22:00:44.628923  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:44.628978  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.632987  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.636682  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:44.636752  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:44.668029  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:44.668040  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:44.668044  882202 cri.go:89] found id: ""
	I1119 22:00:44.668051  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:44.668124  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.671937  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.675464  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:44.675525  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:44.701133  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:44.701144  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:44.701147  882202 cri.go:89] found id: ""
	I1119 22:00:44.701154  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:44.701210  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.705019  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.708632  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:44.708709  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:44.734695  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:44.734707  882202 cri.go:89] found id: ""
	I1119 22:00:44.734713  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:44.734780  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.738657  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:44.738728  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:44.764776  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:44.764788  882202 cri.go:89] found id: ""
	I1119 22:00:44.764795  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:44.764864  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.768626  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:44.768689  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:44.794937  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:44.794949  882202 cri.go:89] found id: ""
	I1119 22:00:44.794955  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:44.795013  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.799601  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:44.799677  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:44.831313  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:44.831324  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:44.831328  882202 cri.go:89] found id: ""
	I1119 22:00:44.831343  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:44.831399  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.835160  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:44.838770  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:44.838785  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:44.897266  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:44.897287  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:44.923831  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:44.923848  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:44.951024  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:44.951041  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:44.981982  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:44.981998  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:45.040695  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:45.040730  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:45.106652  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:45.106676  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:45.174487  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:45.174510  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:45.216975  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:45.217024  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:45.259979  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:45.260007  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:45.347546  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:45.347567  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:45.384199  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:45.384215  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:45.486977  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:45.486998  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:45.534193  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:45.534214  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:45.565279  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:45.565296  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
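	# Hedged aside (assumes util-linux dmesg, as used above): the dmesg sweep keeps only
	# warning-and-above kernel messages, which is where OOM kills or cgroup failures affecting the
	# apiserver would surface; the same filter can be applied by hand:
	#   sudo dmesg --level warn,err,crit,alert,emerg | tail -n 50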
	I1119 22:00:45.580867  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:45.580883  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:45.651784  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:45.651889  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:45.651903  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:48.273669  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:48.284830  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:48.284896  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:48.311712  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:48.311723  882202 cri.go:89] found id: ""
	I1119 22:00:48.311730  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:48.311784  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.315417  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:48.315483  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:48.342184  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:48.342195  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:48.342198  882202 cri.go:89] found id: ""
	I1119 22:00:48.342205  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:48.342262  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.346125  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.349827  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:48.349891  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:48.379315  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:48.379326  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:48.379329  882202 cri.go:89] found id: ""
	I1119 22:00:48.379335  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:48.379390  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.383305  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.386955  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:48.387016  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:48.413420  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:48.413431  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:48.413435  882202 cri.go:89] found id: ""
	I1119 22:00:48.413441  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:48.413496  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.417291  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.421081  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:48.421161  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:48.455829  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:48.455841  882202 cri.go:89] found id: ""
	I1119 22:00:48.455857  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:48.455916  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.459649  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:48.459710  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:48.488301  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:48.488313  882202 cri.go:89] found id: ""
	I1119 22:00:48.488320  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:48.488388  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.492223  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:48.492301  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:48.522857  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:48.522890  882202 cri.go:89] found id: ""
	I1119 22:00:48.522897  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:48.522953  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.526776  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:48.526849  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:48.554150  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:48.554162  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:48.554165  882202 cri.go:89] found id: ""
	I1119 22:00:48.554171  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:48.554233  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.557961  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:48.561453  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:48.561469  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:48.659925  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:48.659948  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:48.693055  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:48.693073  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:48.747449  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:48.747471  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:48.775876  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:48.775892  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:48.801740  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:48.801756  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:48.832168  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:48.832185  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:48.945885  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:48.945905  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:48.991752  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:48.991772  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:49.022361  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:49.022377  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:49.102007  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:49.102031  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:49.134948  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:49.134966  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:49.166672  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:49.166688  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:49.193122  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:49.193140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:49.219683  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:49.219699  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:49.249006  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:49.249022  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:49.263887  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:49.263905  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:49.333837  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
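	# Illustrative sketch (hypothetical, same node assumed): between sweeps the collector polls for an
	# apiserver process with the probe below; running it by hand is a quick way to watch for recovery:
	#   sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process is running"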
	I1119 22:00:51.834431  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:51.845406  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:51.845461  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:51.880005  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:51.880016  882202 cri.go:89] found id: ""
	I1119 22:00:51.880024  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:51.880093  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.883793  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:51.883858  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:51.913406  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:51.913417  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:51.913420  882202 cri.go:89] found id: ""
	I1119 22:00:51.913426  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:51.913481  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.917025  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.920395  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:51.920463  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:51.947402  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:51.947414  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:51.947418  882202 cri.go:89] found id: ""
	I1119 22:00:51.947424  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:51.947491  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.951160  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.954735  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:51.954816  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:51.980983  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:51.980995  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:51.980999  882202 cri.go:89] found id: ""
	I1119 22:00:51.981007  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:51.981065  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.984734  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:51.988432  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:51.988505  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:52.024664  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:52.024676  882202 cri.go:89] found id: ""
	I1119 22:00:52.024684  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:52.024751  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:52.028944  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:52.029009  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:52.056274  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:52.056286  882202 cri.go:89] found id: ""
	I1119 22:00:52.056293  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:52.056348  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:52.060029  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:52.060093  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:52.086980  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:52.086992  882202 cri.go:89] found id: ""
	I1119 22:00:52.086998  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:52.087053  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:52.090752  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:52.090812  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:52.117384  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:52.117395  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:52.117398  882202 cri.go:89] found id: ""
	I1119 22:00:52.117405  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:52.117462  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:52.121395  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:52.125245  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:52.125267  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:52.226490  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:52.226511  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:52.343750  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:52.343770  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:52.380675  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:52.380693  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:52.406412  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:52.406431  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:52.432798  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:52.432814  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:52.473401  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:52.473417  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:52.498434  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:52.498450  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:52.534662  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:52.534688  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:52.550044  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:52.550063  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:52.597742  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:52.597761  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:52.627658  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:52.627676  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:52.704438  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:52.704463  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:52.741376  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:52.741392  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:52.809496  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:52.809506  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:52.809524  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:52.865818  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:52.865838  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:52.894316  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:52.894336  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:55.422412  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:55.434204  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:55.434266  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:55.469849  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:55.469860  882202 cri.go:89] found id: ""
	I1119 22:00:55.469867  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:55.469925  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.474058  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:55.474125  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:55.501542  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:55.501555  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:55.501559  882202 cri.go:89] found id: ""
	I1119 22:00:55.501565  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:55.501620  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.505336  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.509320  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:55.509387  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:55.539316  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:55.539328  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:55.539331  882202 cri.go:89] found id: ""
	I1119 22:00:55.539337  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:55.539393  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.543609  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.547479  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:55.547548  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:55.575841  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:55.575854  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:55.575857  882202 cri.go:89] found id: ""
	I1119 22:00:55.575863  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:55.575921  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.580197  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.584146  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:55.584226  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:55.612273  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:55.612285  882202 cri.go:89] found id: ""
	I1119 22:00:55.612292  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:55.612355  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.616506  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:55.616566  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:55.644772  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:55.644783  882202 cri.go:89] found id: ""
	I1119 22:00:55.644790  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:55.644852  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.648529  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:55.648599  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:55.675874  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:55.675885  882202 cri.go:89] found id: ""
	I1119 22:00:55.675891  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:55.675956  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.679644  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:55.679742  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:55.711194  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:55.711204  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:55.711207  882202 cri.go:89] found id: ""
	I1119 22:00:55.711214  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:55.711273  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.715021  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:55.718705  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:55.718720  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:55.745098  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:55.745120  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:55.776877  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:55.776894  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:55.834595  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:00:55.834615  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:55.862485  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:55.862501  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:55.959632  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:55.959653  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:55.976419  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:55.976436  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:56.012564  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:56.012581  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:56.068393  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:56.068414  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:56.097438  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:56.097458  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:56.126627  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:56.126642  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:56.214001  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:00:56.214020  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:00:56.249117  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:56.249142  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:56.318803  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
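	# Hypothetical follow-up, assuming systemd units named kubelet and crio as in the sweeps above:
	# those journals usually carry the reason the apiserver keeps refusing connections, and can be
	# tailed directly:
	#   sudo journalctl -u kubelet -u crio --since "10 min ago" --no-pager | grep -iE 'error|fail' | tail -n 50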
	I1119 22:00:56.318812  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:00:56.318825  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:56.346805  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:56.346821  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:56.379925  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:00:56.379944  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:56.411382  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:56.411397  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:59.051013  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:00:59.061773  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:00:59.061835  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:00:59.089637  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:59.089648  882202 cri.go:89] found id: ""
	I1119 22:00:59.089655  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:00:59.089708  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.093412  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:00:59.093474  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:00:59.120416  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:00:59.120427  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:59.120430  882202 cri.go:89] found id: ""
	I1119 22:00:59.120436  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:00:59.120494  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.124252  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.127851  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:00:59.127926  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:00:59.159694  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:59.159705  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:59.159708  882202 cri.go:89] found id: ""
	I1119 22:00:59.159715  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:00:59.159772  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.163503  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.167331  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:00:59.167407  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:00:59.194515  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:59.194526  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:59.194530  882202 cri.go:89] found id: ""
	I1119 22:00:59.194536  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:00:59.194593  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.198401  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.201990  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:00:59.202047  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:00:59.229329  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:00:59.229341  882202 cri.go:89] found id: ""
	I1119 22:00:59.229348  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:00:59.229402  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.233064  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:00:59.233127  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:00:59.260128  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:59.260140  882202 cri.go:89] found id: ""
	I1119 22:00:59.260147  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:00:59.260204  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.263955  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:00:59.264016  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:00:59.289545  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:00:59.289557  882202 cri.go:89] found id: ""
	I1119 22:00:59.289564  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:00:59.289620  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.293258  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:00:59.293318  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:00:59.319663  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:59.319674  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:00:59.319678  882202 cri.go:89] found id: ""
	I1119 22:00:59.319685  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:00:59.319741  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.323558  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:00:59.327017  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:00:59.327030  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:00:59.403658  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:00:59.403680  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:00:59.520546  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:00:59.520567  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:00:59.546786  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:00:59.546803  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:00:59.625594  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:00:59.625633  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:00:59.658175  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:00:59.658193  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:00:59.673642  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:00:59.673658  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:00:59.742575  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:00:59.742585  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:00:59.742596  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:00:59.786812  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:00:59.786832  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:00:59.812924  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:00:59.812944  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:00:59.840079  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:00:59.840095  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:00:59.871969  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:00:59.871986  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:00:59.970152  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:00:59.970173  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:00.093240  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:00.093270  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:00.264413  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:00.264435  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:00.352333  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:00.352363  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:00.417264  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:00.417283  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
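	# A minimal sketch of the "container status" fallback used above (assumes either crictl or docker
	# is installed): crictl is preferred, and docker is only consulted when crictl is missing:
	#   sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a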
	I1119 22:01:02.968261  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:02.980377  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:02.980456  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:03.011779  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:03.011791  882202 cri.go:89] found id: ""
	I1119 22:01:03.011798  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:03.011858  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.015851  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:03.015915  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:03.046076  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:03.046087  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:03.046091  882202 cri.go:89] found id: ""
	I1119 22:01:03.046097  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:03.046155  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.050090  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.054000  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:03.054062  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:03.081357  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:03.081369  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:03.081372  882202 cri.go:89] found id: ""
	I1119 22:01:03.081379  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:03.081442  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.085194  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.088938  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:03.089003  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:03.116236  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:03.116248  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:03.116251  882202 cri.go:89] found id: ""
	I1119 22:01:03.116258  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:03.116317  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.120176  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.123843  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:03.123904  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:03.150273  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:03.150285  882202 cri.go:89] found id: ""
	I1119 22:01:03.150293  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:03.150360  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.154771  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:03.154845  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:03.181943  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:03.181955  882202 cri.go:89] found id: ""
	I1119 22:01:03.181961  882202 logs.go:282] 1 containers: [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:01:03.182017  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.186361  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:03.186421  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:03.212804  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:03.212831  882202 cri.go:89] found id: ""
	I1119 22:01:03.212838  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:03.212894  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.216548  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:03.216610  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:03.248566  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:03.248577  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:03.248580  882202 cri.go:89] found id: ""
	I1119 22:01:03.248587  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:03.248641  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.252165  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:03.255505  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:03.255519  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:03.325511  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:03.325522  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:03.325551  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:03.464819  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:03.464841  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:03.530097  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:03.530117  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:03.586773  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:03.586794  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:03.612356  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:03.612405  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:03.644842  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:03.644859  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:03.750903  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:03.750923  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:03.790952  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:03.790969  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:03.818917  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:01:03.818934  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:03.845283  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:03.845297  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:03.924976  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:03.924996  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:03.956054  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:03.956071  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:03.984545  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:03.984560  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:04.017984  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:04.018001  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:04.033213  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:04.033233  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:04.060698  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:04.060714  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
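	The cycle above shows how the log collection first locates each control-plane container by name with crictl and then tails its logs. A minimal manual sketch of the same two steps, using the commands already visible in this run (the container ID below is a placeholder for whatever the first command returns, e.g. 624b3f86... for kube-apiserver here):

	# List all containers whose name matches kube-apiserver and print only their IDs,
	# exactly as the gathering loop above does.
	sudo crictl ps -a --quiet --name=kube-apiserver

	# Tail the last 400 log lines of one returned container.
	# <container-id> is a placeholder; substitute an ID from the previous command.
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
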
	I1119 22:01:06.591468  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:06.603872  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:06.603930  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:06.633141  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:06.633152  882202 cri.go:89] found id: ""
	I1119 22:01:06.633159  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:06.633212  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.637393  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:06.637457  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:06.682768  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:06.682779  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:06.682782  882202 cri.go:89] found id: ""
	I1119 22:01:06.682788  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:06.682851  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.688271  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.692092  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:06.692152  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:06.722384  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:06.722394  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:06.722397  882202 cri.go:89] found id: ""
	I1119 22:01:06.722416  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:06.722477  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.726827  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.731041  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:06.731101  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:06.764405  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:06.764415  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:06.764419  882202 cri.go:89] found id: ""
	I1119 22:01:06.764425  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:06.764494  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.769941  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.774035  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:06.774092  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:06.805510  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:06.805522  882202 cri.go:89] found id: ""
	I1119 22:01:06.805528  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:06.805592  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.809769  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:06.809828  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:06.845350  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:06.845362  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:06.845365  882202 cri.go:89] found id: ""
	I1119 22:01:06.845371  882202 logs.go:282] 2 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:01:06.845436  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.849479  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.853465  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:06.853525  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:06.882503  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:06.882515  882202 cri.go:89] found id: ""
	I1119 22:01:06.882521  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:06.882575  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.891933  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:06.891998  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:06.919330  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:06.919341  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:06.919344  882202 cri.go:89] found id: ""
	I1119 22:01:06.919351  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:06.919407  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.923655  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:06.927619  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:06.927634  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:06.943051  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:06.943069  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:07.011409  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:07.011434  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:07.042090  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:07.042106  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:07.093007  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:07.093025  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:07.201563  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:07.201585  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:07.304119  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:07.304128  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:07.304140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:07.357809  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:07.357828  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:07.387222  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:07.387239  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:07.433005  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:07.433024  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:07.471452  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:07.471469  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:07.556802  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:07.556824  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:07.587415  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:07.587431  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:07.650458  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:07.650478  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:07.678676  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:07.678691  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:07.796566  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:07.796587  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:07.828133  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:07.828150  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:07.866995  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:01:07.867020  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
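	The recurring "failed describe nodes" warning in this section comes from running kubectl against the node-local kubeconfig while the apiserver on localhost:8441 is not accepting connections. A sketch of the same probe, assuming SSH access to the node and the paths shown in the log; while the apiserver is down it is expected to exit with status 1 and the same "connection refused" message:

	# Same describe-nodes command the gathering loop runs above; fails with
	# "The connection to the server localhost:8441 was refused" until the
	# apiserver is reachable again.
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
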
	I1119 22:01:10.402577  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:10.415403  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:10.415471  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:10.451074  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:10.451085  882202 cri.go:89] found id: ""
	I1119 22:01:10.451092  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:10.451153  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.455160  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:10.455229  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:10.486468  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:10.486479  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:10.486483  882202 cri.go:89] found id: ""
	I1119 22:01:10.486489  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:10.486560  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.490265  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.493775  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:10.493847  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:10.519003  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:10.519014  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:10.519017  882202 cri.go:89] found id: ""
	I1119 22:01:10.519023  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:10.519077  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.522929  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.526337  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:10.526400  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:10.553512  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:10.553523  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:10.553526  882202 cri.go:89] found id: ""
	I1119 22:01:10.553533  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:10.553597  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.557344  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.560693  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:10.560753  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:10.590125  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:10.590137  882202 cri.go:89] found id: ""
	I1119 22:01:10.590144  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:10.590200  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.593990  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:10.594051  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:10.622995  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:10.623006  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:10.623009  882202 cri.go:89] found id: ""
	I1119 22:01:10.623015  882202 logs.go:282] 2 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:01:10.623068  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.627245  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.631403  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:10.631461  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:10.657759  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:10.657770  882202 cri.go:89] found id: ""
	I1119 22:01:10.657776  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:10.657829  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.661366  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:10.661427  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:10.686602  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:10.686613  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:10.686617  882202 cri.go:89] found id: ""
	I1119 22:01:10.686623  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:10.686679  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.690590  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:10.694224  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:10.694239  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:10.727409  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:10.727424  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:10.753214  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:01:10.753229  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:10.778778  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:10.778795  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:10.804029  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:10.804045  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:10.841274  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:10.841292  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:10.912418  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:10.912428  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:10.912438  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:10.938340  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:10.938357  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:10.969807  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:10.969824  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:11.038281  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:11.038302  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:11.054779  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:11.054795  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:11.170262  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:11.170283  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:11.201857  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:11.201874  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:11.235880  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:11.235899  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:11.336948  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:11.336975  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:11.376980  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:11.376997  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:11.422207  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:11.422227  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:11.449688  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:11.449704  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:14.031027  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:14.042822  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:14.042918  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:14.078043  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:14.078055  882202 cri.go:89] found id: ""
	I1119 22:01:14.078061  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:14.078118  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.081951  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:14.082013  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:14.108795  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:14.108807  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:14.108810  882202 cri.go:89] found id: ""
	I1119 22:01:14.108817  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:14.108871  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.112771  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.116438  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:14.116500  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:14.141917  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:14.141928  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:14.141930  882202 cri.go:89] found id: ""
	I1119 22:01:14.141936  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:14.141990  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.145944  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.149656  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:14.149715  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:14.176066  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:14.176078  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:14.176081  882202 cri.go:89] found id: ""
	I1119 22:01:14.176087  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:14.176141  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.179858  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.183745  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:14.183810  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:14.209861  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:14.209872  882202 cri.go:89] found id: ""
	I1119 22:01:14.209878  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:14.209935  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.213626  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:14.213686  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:14.240278  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:14.240289  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:14.240292  882202 cri.go:89] found id: ""
	I1119 22:01:14.240299  882202 logs.go:282] 2 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:01:14.240357  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.244108  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.247632  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:14.247706  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:14.273600  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:14.273611  882202 cri.go:89] found id: ""
	I1119 22:01:14.273617  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:14.273672  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.277352  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:14.277416  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:14.307712  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:14.307724  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:14.307727  882202 cri.go:89] found id: ""
	I1119 22:01:14.307733  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:14.307808  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.311569  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:14.315073  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:14.315088  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:14.341971  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:14.341987  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:14.370887  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:14.370903  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:14.439849  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:14.439858  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:14.439870  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:14.465983  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:14.465999  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:14.509912  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:01:14.509929  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:14.535341  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:14.535357  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:14.581543  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:14.581568  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:14.687730  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:14.687754  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:14.703008  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:14.703026  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:14.732609  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:14.732628  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:14.759541  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:14.759557  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:14.784670  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:14.784686  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:14.861627  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:14.861647  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:14.892981  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:14.892998  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:15.005353  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:15.005375  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:15.065236  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:15.065255  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:15.094088  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:15.094105  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
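	Besides per-container logs, each cycle also pulls node-level sources: kubelet and CRI-O via journalctl, and kernel warnings via dmesg. The same commands, copied from the runs above, can be issued directly on the node if the loop output is not enough:

	# Node-level log sources used by the gathering loop above.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
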
	I1119 22:01:17.653417  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:17.664592  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:17.664648  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:17.695831  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:17.695842  882202 cri.go:89] found id: ""
	I1119 22:01:17.695848  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:17.695902  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.699809  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:17.699886  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:17.725960  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:17.725971  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:17.725975  882202 cri.go:89] found id: ""
	I1119 22:01:17.725983  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:17.726037  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.729804  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.733442  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:17.733507  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:17.759374  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:17.759386  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:17.759389  882202 cri.go:89] found id: ""
	I1119 22:01:17.759395  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:17.759446  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.763072  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.766459  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:17.766530  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:17.792982  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:17.792994  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:17.792997  882202 cri.go:89] found id: ""
	I1119 22:01:17.793003  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:17.793057  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.796880  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.800825  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:17.800894  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:17.826472  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:17.826483  882202 cri.go:89] found id: ""
	I1119 22:01:17.826489  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:17.826545  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.830226  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:17.830287  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:17.856339  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:17.856351  882202 cri.go:89] found id: "d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:17.856363  882202 cri.go:89] found id: ""
	I1119 22:01:17.856369  882202 logs.go:282] 2 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386]
	I1119 22:01:17.856425  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.860303  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.863880  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:17.863945  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:17.890141  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:17.890152  882202 cri.go:89] found id: ""
	I1119 22:01:17.890158  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:17.890213  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.894002  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:17.894063  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:17.921927  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:17.921938  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:17.921941  882202 cri.go:89] found id: ""
	I1119 22:01:17.921947  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:17.922003  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.925814  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:17.929310  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:17.929325  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:17.954242  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:17.954258  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:17.979831  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:17.979848  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:18.050121  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:18.050142  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:18.077206  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:18.077224  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:18.104277  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:18.104295  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:18.182835  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:18.182854  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:18.232506  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:18.232523  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:18.334533  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:18.334555  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:18.450503  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:18.450524  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:18.486289  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:18.486306  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:18.514804  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:18.514823  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:18.590251  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:18.590262  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:18.590274  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:18.635244  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:18.635264  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:18.661485  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:18.661500  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:18.686447  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:18.686463  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:18.717280  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:18.717297  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:18.733161  882202 logs.go:123] Gathering logs for kube-controller-manager [d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386] ...
	I1119 22:01:18.733177  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d63e0a4512219fc3156df9afb7c23cc645a173e4c61be77eb7131fac18f11386"
	I1119 22:01:21.261077  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:21.272271  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:21.272328  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:21.298645  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:21.298656  882202 cri.go:89] found id: ""
	I1119 22:01:21.298663  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:21.298718  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.302228  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:21.302289  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:21.327685  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:21.327710  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:21.327713  882202 cri.go:89] found id: ""
	I1119 22:01:21.327719  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:21.327785  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.331469  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.335016  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:21.335074  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:21.365964  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:21.365976  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:21.365980  882202 cri.go:89] found id: ""
	I1119 22:01:21.365986  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:21.366041  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.369749  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.373019  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:21.373080  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:21.397939  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:21.397950  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:21.397954  882202 cri.go:89] found id: ""
	I1119 22:01:21.397968  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:21.398028  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.401494  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.404748  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:21.404802  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:21.429582  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:21.429593  882202 cri.go:89] found id: ""
	I1119 22:01:21.429599  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:21.429664  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.432921  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:21.432973  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:21.465658  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:21.465669  882202 cri.go:89] found id: ""
	I1119 22:01:21.465675  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:21.465727  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.469390  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:21.469460  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:21.495279  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:21.495291  882202 cri.go:89] found id: ""
	I1119 22:01:21.495298  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:21.495364  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.499046  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:21.499115  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:21.524627  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:21.524638  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:21.524647  882202 cri.go:89] found id: ""
	I1119 22:01:21.524653  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:21.524705  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.528309  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:21.531793  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:21.531806  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:21.629546  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:21.629570  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:21.647429  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:21.647447  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:21.761447  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:21.761469  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:21.787858  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:21.787874  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:21.846719  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:21.846738  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:21.917569  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:21.917581  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:21.917592  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:21.953260  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:21.953282  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:21.980250  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:21.980267  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:22.023297  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:22.023315  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:22.073299  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:22.073319  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:22.103246  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:22.103264  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:22.147074  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:22.147095  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:22.175071  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:22.175088  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:22.202915  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:22.202932  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:22.228908  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:22.228925  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:22.255589  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:22.255606  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:24.833799  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:24.845006  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:24.845067  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:24.870553  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:24.870564  882202 cri.go:89] found id: ""
	I1119 22:01:24.870570  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:24.870628  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.874283  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:24.874342  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:24.901425  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:24.901436  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:24.901440  882202 cri.go:89] found id: ""
	I1119 22:01:24.901446  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:24.901501  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.905290  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.908773  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:24.908885  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:24.935782  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:24.935793  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:24.935796  882202 cri.go:89] found id: ""
	I1119 22:01:24.935802  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:24.935861  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.939793  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.943492  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:24.943566  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:24.970900  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:24.970911  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:24.970914  882202 cri.go:89] found id: ""
	I1119 22:01:24.970921  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:24.970980  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.974537  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:24.978029  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:24.978090  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:25.009041  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:25.009055  882202 cri.go:89] found id: ""
	I1119 22:01:25.009062  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:25.009152  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:25.014930  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:25.014998  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:25.047696  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:25.047708  882202 cri.go:89] found id: ""
	I1119 22:01:25.047715  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:25.047773  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:25.051521  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:25.051586  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:25.083420  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:25.083439  882202 cri.go:89] found id: ""
	I1119 22:01:25.083447  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:25.083507  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:25.087320  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:25.087384  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:25.112472  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:25.112484  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:25.112487  882202 cri.go:89] found id: ""
	I1119 22:01:25.112493  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:25.112551  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:25.116501  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:25.119943  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:25.119959  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:25.150773  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:25.150789  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:25.178209  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:25.178225  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:25.229262  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:25.229279  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:25.257765  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:25.257782  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:25.287237  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:25.287255  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:25.384767  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:25.384788  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:25.448693  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:25.448712  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:25.477005  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:25.477023  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:25.503063  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:25.503079  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:25.587691  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:25.587720  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:25.603726  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:25.603746  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:25.668650  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:25.668676  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:25.668687  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:25.782030  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:25.782055  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:25.819197  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:25.819215  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:25.853475  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:25.853492  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:25.901607  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:25.901628  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:28.428933  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:28.440149  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:28.440211  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:28.467647  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:28.467657  882202 cri.go:89] found id: ""
	I1119 22:01:28.467664  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:28.467718  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.471621  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:28.471684  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:28.498114  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:28.498125  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:28.498129  882202 cri.go:89] found id: ""
	I1119 22:01:28.498135  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:28.498191  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.502061  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.505720  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:28.505795  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:28.531936  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:28.531946  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:28.531949  882202 cri.go:89] found id: ""
	I1119 22:01:28.531956  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:28.532010  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.535633  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.539204  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:28.539265  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:28.564496  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:28.564507  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:28.564511  882202 cri.go:89] found id: ""
	I1119 22:01:28.564517  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:28.564571  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.568669  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.572219  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:28.572289  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:28.597725  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:28.597735  882202 cri.go:89] found id: ""
	I1119 22:01:28.597742  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:28.597811  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.601528  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:28.601605  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:28.627693  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:28.627704  882202 cri.go:89] found id: ""
	I1119 22:01:28.627711  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:28.627761  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.631373  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:28.631450  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:28.660385  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:28.660396  882202 cri.go:89] found id: ""
	I1119 22:01:28.660403  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:28.660456  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.664035  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:28.664094  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:28.689771  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:28.689782  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:28.689786  882202 cri.go:89] found id: ""
	I1119 22:01:28.689792  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:28.689847  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.693497  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:28.696809  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:28.696825  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:28.732587  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:28.732605  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:28.812164  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:28.812184  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:28.850606  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:28.850628  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:28.876780  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:28.876797  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:28.938848  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:28.938879  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:28.965552  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:28.965586  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:29.066083  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:29.066104  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:29.097399  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:29.097416  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:29.124627  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:29.124644  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:29.139953  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:29.139969  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:29.207762  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:29.207783  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:29.207794  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:29.252991  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:29.253010  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:29.285398  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:29.285415  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:29.402280  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:29.402302  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:29.427540  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:29.427556  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:29.460045  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:29.460063  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:31.984573  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:31.995803  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:31.995862  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:32.024339  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:32.024351  882202 cri.go:89] found id: ""
	I1119 22:01:32.024358  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:32.024419  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.028308  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:32.028388  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:32.061089  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:32.061102  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:32.061105  882202 cri.go:89] found id: ""
	I1119 22:01:32.061111  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:32.061167  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.065052  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.068720  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:32.068798  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:32.099143  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:32.099155  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:32.099159  882202 cri.go:89] found id: ""
	I1119 22:01:32.099168  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:32.099226  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.102941  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.106427  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:32.106504  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:32.133639  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:32.133651  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:32.133654  882202 cri.go:89] found id: ""
	I1119 22:01:32.133661  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:32.133718  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.137626  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.140956  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:32.141028  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:32.168257  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:32.168268  882202 cri.go:89] found id: ""
	I1119 22:01:32.168274  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:32.168327  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.172014  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:32.172089  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:32.198704  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:32.198715  882202 cri.go:89] found id: ""
	I1119 22:01:32.198722  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:32.198779  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.202466  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:32.202523  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:32.228319  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:32.228330  882202 cri.go:89] found id: ""
	I1119 22:01:32.228337  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:32.228405  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.232188  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:32.232249  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:32.258032  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:32.258044  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:32.258047  882202 cri.go:89] found id: ""
	I1119 22:01:32.258053  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:32.258110  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.261772  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:32.265126  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:32.265140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:32.289702  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:32.289718  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:32.319167  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:32.319182  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:32.396038  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:32.396060  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:32.512906  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:32.512930  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:32.573670  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:32.573691  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:32.602305  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:32.602324  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:32.627938  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:32.627953  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:32.725411  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:32.725436  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:32.741672  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:32.741690  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:32.770914  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:32.770931  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:32.803891  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:32.803910  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:32.835128  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:32.835145  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:32.869305  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:32.869324  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:32.917074  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:32.917092  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:32.945044  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:32.945063  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:32.974368  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:32.974384  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:33.047007  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:35.547259  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:35.557952  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:35.558012  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:35.583253  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:35.583264  882202 cri.go:89] found id: ""
	I1119 22:01:35.583270  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:35.583327  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.586820  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:35.586911  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:35.613333  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:35.613345  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:35.613349  882202 cri.go:89] found id: ""
	I1119 22:01:35.613356  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:35.613411  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.617036  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.620479  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:35.620544  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:35.651764  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:35.651775  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:35.651778  882202 cri.go:89] found id: ""
	I1119 22:01:35.651784  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:35.651838  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.655629  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.659142  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:35.659208  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:35.684701  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:35.684713  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:35.684716  882202 cri.go:89] found id: ""
	I1119 22:01:35.684723  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:35.684779  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.688592  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.692286  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:35.692381  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:35.718987  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:35.718998  882202 cri.go:89] found id: ""
	I1119 22:01:35.719005  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:35.719060  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.722625  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:35.722684  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:35.755084  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:35.755101  882202 cri.go:89] found id: ""
	I1119 22:01:35.755108  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:35.755164  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.758828  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:35.758923  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:35.785818  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:35.785830  882202 cri.go:89] found id: ""
	I1119 22:01:35.785837  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:35.785887  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.789500  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:35.789561  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:35.816355  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:35.816367  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:35.816370  882202 cri.go:89] found id: ""
	I1119 22:01:35.816376  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:35.816434  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.820220  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:35.823851  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:35.823866  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:35.838798  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:35.838816  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:35.880257  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:35.880275  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:35.911204  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:35.911221  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:35.936678  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:35.936694  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:35.970121  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:35.970140  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:36.044188  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:36.044198  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:36.044215  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:36.073531  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:36.073549  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:36.132753  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:36.132773  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:36.214625  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:36.214646  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:36.262151  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:36.262169  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:36.288443  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:36.288460  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:36.317596  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:36.317612  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:36.361276  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:36.361297  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:36.469755  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:36.469777  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:36.591048  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:36.591068  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:36.618350  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:36.618366  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:39.147226  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:39.158250  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:39.158309  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:39.186256  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:39.186267  882202 cri.go:89] found id: ""
	I1119 22:01:39.186274  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:39.186333  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.190083  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:39.190152  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:39.215714  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:39.215726  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:39.215729  882202 cri.go:89] found id: ""
	I1119 22:01:39.215735  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:39.215790  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.227187  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.230815  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:39.231046  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:39.257369  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:39.257381  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:39.257384  882202 cri.go:89] found id: ""
	I1119 22:01:39.257390  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:39.257446  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.261214  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.264719  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:39.264782  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:39.291725  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:39.291737  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:39.291741  882202 cri.go:89] found id: ""
	I1119 22:01:39.291747  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:39.291807  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.295602  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.299291  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:39.299354  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:39.325953  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:39.325966  882202 cri.go:89] found id: ""
	I1119 22:01:39.325973  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:39.326042  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.329682  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:39.329744  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:39.356210  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:39.356221  882202 cri.go:89] found id: ""
	I1119 22:01:39.356228  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:39.356297  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.360128  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:39.360204  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:39.387238  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:39.387252  882202 cri.go:89] found id: ""
	I1119 22:01:39.387259  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:39.387313  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.391149  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:39.391222  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:39.417008  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:39.417020  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:39.417023  882202 cri.go:89] found id: ""
	I1119 22:01:39.417029  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:39.417087  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.420821  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:39.424365  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:39.424379  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:39.453951  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:39.453970  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:39.499715  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:39.499731  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:39.596965  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:39.596985  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:39.640941  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:39.640958  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:39.668632  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:39.668648  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:39.698985  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:39.699005  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:39.725737  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:39.725753  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:39.800420  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:39.800431  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:39.800443  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:39.874129  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:39.874151  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:39.940689  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:39.940710  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:39.969777  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:39.969793  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:40.057125  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:40.057149  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:40.094267  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:40.094287  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:40.121308  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:40.121327  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:40.136688  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:40.136705  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:40.260914  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:40.260935  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:42.786239  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:42.797483  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:42.797542  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:42.827412  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:42.827424  882202 cri.go:89] found id: ""
	I1119 22:01:42.827431  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:42.827488  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.830994  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:42.831056  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:42.856905  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:42.856918  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:42.856921  882202 cri.go:89] found id: ""
	I1119 22:01:42.856927  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:42.856986  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.860833  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.864515  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:42.864585  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:42.895743  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:42.895755  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:42.895758  882202 cri.go:89] found id: ""
	I1119 22:01:42.895765  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:42.895822  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.899760  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.903528  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:42.903593  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:42.932724  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:42.932735  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:42.932739  882202 cri.go:89] found id: ""
	I1119 22:01:42.932745  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:42.932802  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.936549  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.940182  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:42.940252  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:42.971497  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:42.971510  882202 cri.go:89] found id: ""
	I1119 22:01:42.971517  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:42.971573  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:42.975205  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:42.975266  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:43.003554  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:43.003570  882202 cri.go:89] found id: ""
	I1119 22:01:43.003577  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:43.003652  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:43.008884  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:43.008965  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:43.039622  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:43.039634  882202 cri.go:89] found id: ""
	I1119 22:01:43.039640  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:43.039699  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:43.043732  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:43.043798  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:43.071926  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:43.071938  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:43.071941  882202 cri.go:89] found id: ""
	I1119 22:01:43.071948  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:43.072004  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:43.075984  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:43.079604  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:43.079618  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:43.105647  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:43.105664  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:43.132383  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:43.132402  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:43.158637  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:43.158653  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:43.225995  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:43.226004  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:43.226020  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:43.260487  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:43.260506  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:43.326113  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:43.326134  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:43.353861  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:43.353877  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:43.430391  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:43.430410  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:43.551674  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:43.551698  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:43.597315  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:43.597336  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:43.642960  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:43.642978  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:43.673830  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:43.673846  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:43.770310  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:43.770329  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:43.797603  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:43.797618  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:43.824353  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:43.824369  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:43.854169  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:43.854185  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:46.371020  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:46.382232  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:46.382292  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:46.411088  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:46.411101  882202 cri.go:89] found id: ""
	I1119 22:01:46.411108  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:46.411171  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.414971  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:46.415037  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:46.449941  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:46.449954  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:46.449957  882202 cri.go:89] found id: ""
	I1119 22:01:46.449964  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:46.450021  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.453738  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.457490  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:46.457555  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:46.487360  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:46.487374  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:46.487378  882202 cri.go:89] found id: ""
	I1119 22:01:46.487384  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:46.487439  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.491323  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.495035  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:46.495097  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:46.521203  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:46.521215  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:46.521219  882202 cri.go:89] found id: ""
	I1119 22:01:46.521226  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:46.521284  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.524935  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.528506  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:46.528575  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:46.555943  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:46.555955  882202 cri.go:89] found id: ""
	I1119 22:01:46.555962  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:46.556018  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.559773  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:46.559836  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:46.585156  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:46.585167  882202 cri.go:89] found id: ""
	I1119 22:01:46.585174  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:46.585229  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.588747  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:46.588820  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:46.616214  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:46.616226  882202 cri.go:89] found id: ""
	I1119 22:01:46.616232  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:46.616287  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.620002  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:46.620064  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:46.645995  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:46.646007  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:46.646010  882202 cri.go:89] found id: ""
	I1119 22:01:46.646031  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:46.646100  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.649702  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:46.653050  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:46.653068  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:46.680030  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:46.680046  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:46.709515  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:46.709532  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:46.753164  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:46.753185  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:46.824168  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:46.824198  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:46.824210  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:46.858667  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:46.858686  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:46.888416  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:46.888433  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:46.915804  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:46.915820  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:46.946477  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:46.946495  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:47.003764  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:47.003791  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:47.031784  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:47.031799  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:47.111614  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:47.111633  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:47.213443  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:47.213463  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:47.324647  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:47.324670  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:47.353816  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:47.353834  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:47.369622  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:47.369641  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:47.429861  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:47.429890  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:49.960308  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:49.971173  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:49.971233  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:50.004406  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:50.004419  882202 cri.go:89] found id: ""
	I1119 22:01:50.004427  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:50.004515  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.013403  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:50.013488  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:50.042167  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:50.042178  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:50.042182  882202 cri.go:89] found id: ""
	I1119 22:01:50.042188  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:50.042245  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.046018  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.049526  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:50.049599  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:50.076050  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:50.076061  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:50.076065  882202 cri.go:89] found id: ""
	I1119 22:01:50.076072  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:50.076141  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.080188  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.083988  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:50.084055  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:50.111788  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:50.111800  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:50.111803  882202 cri.go:89] found id: ""
	I1119 22:01:50.111809  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:50.111863  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.115705  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.119125  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:50.119193  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:50.149025  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:50.149036  882202 cri.go:89] found id: ""
	I1119 22:01:50.149043  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:50.149098  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.152856  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:50.152926  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:50.179455  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:50.179466  882202 cri.go:89] found id: ""
	I1119 22:01:50.179474  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:50.179539  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.183975  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:50.184042  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:50.210327  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:50.210338  882202 cri.go:89] found id: ""
	I1119 22:01:50.210347  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:50.210398  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.213971  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:50.214033  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:50.239795  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:50.239806  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:50.239815  882202 cri.go:89] found id: ""
	I1119 22:01:50.239821  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:50.239874  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.243389  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:50.246656  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:50.246670  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:50.310990  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:50.310999  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:50.311010  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:50.344438  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:50.344457  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:50.389560  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:50.389579  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:50.415189  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:50.415210  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:50.441463  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:50.441480  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:50.523927  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:50.523949  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:50.539274  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:50.539293  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:50.664920  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:50.664949  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:50.694993  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:50.695011  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:50.755310  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:50.755331  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:50.781391  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:50.781406  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:50.813667  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:50.813684  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:50.840615  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:50.840633  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:50.867484  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:50.867501  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:50.964537  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:50.964583  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:51.004516  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:51.004541  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:53.538015  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:53.549790  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:53.549851  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:53.577852  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:53.577863  882202 cri.go:89] found id: ""
	I1119 22:01:53.577870  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:53.577926  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.581653  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:53.581717  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:53.607858  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:53.607870  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:53.607873  882202 cri.go:89] found id: ""
	I1119 22:01:53.607879  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:53.607937  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.611709  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.614918  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:53.614974  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:53.644265  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:53.644276  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:53.644280  882202 cri.go:89] found id: ""
	I1119 22:01:53.644286  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:53.644340  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.647971  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.651331  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:53.651386  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:53.676330  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:53.676341  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:53.676345  882202 cri.go:89] found id: ""
	I1119 22:01:53.676351  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:53.676415  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.680084  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.683411  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:53.683468  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:53.712731  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:53.712742  882202 cri.go:89] found id: ""
	I1119 22:01:53.712748  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:53.712801  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.716370  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:53.716434  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:53.741391  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:53.741401  882202 cri.go:89] found id: ""
	I1119 22:01:53.741407  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:53.741470  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.744961  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:53.745031  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:53.774797  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:53.774808  882202 cri.go:89] found id: ""
	I1119 22:01:53.774815  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:53.774888  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.778507  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:53.778567  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:53.805117  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:53.805129  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:53.805132  882202 cri.go:89] found id: ""
	I1119 22:01:53.805139  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:53.805193  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.808748  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:53.812021  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:53.812037  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:53.837938  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:53.837954  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:53.863858  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:53.863874  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:53.894756  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:53.894774  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:53.990635  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:53.990660  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:54.021879  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:54.021895  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:54.057313  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:54.057330  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:54.102462  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:54.102490  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:54.163482  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:54.163500  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:01:54.243938  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:54.243958  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:54.313964  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:54.313974  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:54.313987  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:54.341292  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:54.341309  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:54.368304  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:54.368321  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:54.403272  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:54.403290  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:54.418457  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:54.418474  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:54.540998  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:54.541018  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:54.568059  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:54.568083  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:57.096296  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:01:57.107276  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:01:57.107339  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:01:57.132419  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:57.132431  882202 cri.go:89] found id: ""
	I1119 22:01:57.132437  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:01:57.132493  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.136224  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:01:57.136285  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:01:57.178225  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:57.178236  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:57.178241  882202 cri.go:89] found id: ""
	I1119 22:01:57.178247  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:01:57.178302  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.181984  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.186262  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:01:57.186323  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:01:57.211427  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:57.211438  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:57.211441  882202 cri.go:89] found id: ""
	I1119 22:01:57.211447  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:01:57.211500  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.215292  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.218689  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:01:57.218748  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:01:57.245017  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:57.245028  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:57.245031  882202 cri.go:89] found id: ""
	I1119 22:01:57.245038  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:01:57.245098  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.248773  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.252028  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:01:57.252094  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:01:57.277252  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:57.277263  882202 cri.go:89] found id: ""
	I1119 22:01:57.277269  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:01:57.277332  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.280862  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:01:57.280919  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:01:57.308470  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:57.308482  882202 cri.go:89] found id: ""
	I1119 22:01:57.308489  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:01:57.308545  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.311992  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:01:57.312087  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:01:57.336870  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:57.336881  882202 cri.go:89] found id: ""
	I1119 22:01:57.336886  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:01:57.336939  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.340580  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:01:57.340657  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:01:57.366507  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:57.366518  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:57.366521  882202 cri.go:89] found id: ""
	I1119 22:01:57.366528  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:01:57.366579  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.370200  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:01:57.373635  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:01:57.373653  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:01:57.398972  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:01:57.398988  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:01:57.464871  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:01:57.464891  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:01:57.489537  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:01:57.489554  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:01:57.505418  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:01:57.505436  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:01:57.533015  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:01:57.533031  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:01:57.567418  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:01:57.567434  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:01:57.599851  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:01:57.599868  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:01:57.624866  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:01:57.624881  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:01:57.722900  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:01:57.722919  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:01:57.833472  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:01:57.833494  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:01:57.869248  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:01:57.869267  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:01:57.917154  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:01:57.917174  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:01:57.942490  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:01:57.942506  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:01:57.970964  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:01:57.970980  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:01:58.045630  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:01:58.045640  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:01:58.045652  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:01:58.074533  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:01:58.074549  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
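For reference, the gathering pattern in the cycle above can be reproduced by hand on the node. A minimal sketch, assuming crictl lives at /usr/local/bin/crictl as in this log and using kube-apiserver as the example component (the container ID placeholder stands for one of the IDs returned by the first command):

    # list container IDs (any state) for one component
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the last 400 log lines of one returned ID
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>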
	I1119 22:02:00.653448  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:00.665087  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:00.665148  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:00.690999  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:00.691011  882202 cri.go:89] found id: ""
	I1119 22:02:00.691017  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:00.691086  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.695135  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:00.695196  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:00.722248  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:00.722260  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:00.722264  882202 cri.go:89] found id: ""
	I1119 22:02:00.722270  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:00.722329  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.726058  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.729724  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:00.729784  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:00.756195  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:00.756207  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:00.756211  882202 cri.go:89] found id: ""
	I1119 22:02:00.756217  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:00.756274  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.760091  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.763656  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:00.763716  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:00.790849  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:00.790886  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:00.790890  882202 cri.go:89] found id: ""
	I1119 22:02:00.790897  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:00.790955  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.795183  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.799879  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:00.799942  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:00.828066  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:00.828078  882202 cri.go:89] found id: ""
	I1119 22:02:00.828084  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:00.828141  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.832048  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:00.832113  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:00.858595  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:00.858608  882202 cri.go:89] found id: ""
	I1119 22:02:00.858618  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:00.858679  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.862665  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:00.862743  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:00.889775  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:00.889787  882202 cri.go:89] found id: ""
	I1119 22:02:00.889794  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:00.889854  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.893699  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:00.893765  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:00.921700  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:00.921712  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:00.921716  882202 cri.go:89] found id: ""
	I1119 22:02:00.921722  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:00.921787  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.925609  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:00.929230  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:00.929246  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:00.944628  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:00.944645  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:00.975506  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:00.975526  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:01.011368  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:01.011386  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:01.040985  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:01.041003  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:01.068388  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:01.068406  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:01.097584  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:01.097600  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:01.127718  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:01.127738  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:01.156328  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:01.156346  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:01.234183  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:01.234205  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:01.337150  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:01.337175  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:01.402409  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:01.402428  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:01.479294  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:01.479315  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:01.510888  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:01.510906  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:01.581151  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:01.581161  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:01.581185  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:01.713510  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:01.713559  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:01.753973  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:01.753992  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
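Every "describe nodes" attempt above fails with "The connection to the server localhost:8441 was refused", meaning nothing is accepting connections on the API server port at that point. A minimal way to check this from the node, assuming 8441 is the apiserver port for this profile as the error message suggests:

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441
    # does the apiserver answer its health endpoint? (-k skips certificate verification)
    curl -k https://localhost:8441/healthz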
	I1119 22:02:04.298439  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:04.309086  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:04.309144  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:04.334324  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:04.334336  882202 cri.go:89] found id: ""
	I1119 22:02:04.334343  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:04.334403  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.338295  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:04.338358  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:04.365373  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:04.365384  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:04.365387  882202 cri.go:89] found id: ""
	I1119 22:02:04.365394  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:04.365450  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.369254  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.372914  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:04.372975  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:04.399523  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:04.399534  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:04.399538  882202 cri.go:89] found id: ""
	I1119 22:02:04.399545  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:04.399599  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.403158  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.406604  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:04.406680  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:04.432411  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:04.432422  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:04.432425  882202 cri.go:89] found id: ""
	I1119 22:02:04.432431  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:04.432500  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.436245  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.439946  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:04.440004  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:04.472734  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:04.472746  882202 cri.go:89] found id: ""
	I1119 22:02:04.472752  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:04.472804  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.476540  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:04.476603  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:04.502745  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:04.502756  882202 cri.go:89] found id: ""
	I1119 22:02:04.502763  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:04.502816  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.506416  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:04.506478  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:04.533341  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:04.533352  882202 cri.go:89] found id: ""
	I1119 22:02:04.533359  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:04.533413  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.536972  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:04.537030  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:04.567960  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:04.567971  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:04.567974  882202 cri.go:89] found id: ""
	I1119 22:02:04.567980  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:04.568039  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.571863  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:04.575456  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:04.575481  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:04.601520  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:04.601538  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:04.641644  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:04.641659  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:04.744463  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:04.744484  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:04.760675  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:04.760693  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:04.833005  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:04.833014  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:04.833027  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:04.860826  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:04.860842  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:04.940582  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:04.940604  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:05.059063  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:05.059084  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:05.093797  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:05.093814  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:05.121631  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:05.121648  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:05.149916  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:05.149934  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:05.177537  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:05.177554  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:05.217315  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:05.217333  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:05.262307  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:05.262326  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:05.289120  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:05.289136  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:05.355109  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:05.355131  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:07.885912  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:07.896946  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:07.897003  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:07.927450  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:07.927461  882202 cri.go:89] found id: ""
	I1119 22:02:07.927468  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:07.927525  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:07.931635  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:07.931696  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:07.958539  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:07.958550  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:07.958554  882202 cri.go:89] found id: ""
	I1119 22:02:07.958560  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:07.958614  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:07.962191  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:07.965574  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:07.965654  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:07.992404  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:07.992415  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:07.992419  882202 cri.go:89] found id: ""
	I1119 22:02:07.992424  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:07.992477  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:07.996091  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:07.999797  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:07.999873  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:08.031122  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:08.031134  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:08.031138  882202 cri.go:89] found id: ""
	I1119 22:02:08.031145  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:08.031205  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.035543  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.039250  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:08.039327  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:08.066933  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:08.066945  882202 cri.go:89] found id: ""
	I1119 22:02:08.066952  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:08.067017  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.071133  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:08.071203  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:08.099565  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:08.099591  882202 cri.go:89] found id: ""
	I1119 22:02:08.099602  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:08.099686  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.104539  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:08.104601  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:08.133573  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:08.133589  882202 cri.go:89] found id: ""
	I1119 22:02:08.133596  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:08.133682  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.137316  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:08.137378  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:08.168135  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:08.168147  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:08.168150  882202 cri.go:89] found id: ""
	I1119 22:02:08.168156  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:08.168213  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.172507  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:08.176435  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:08.176454  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:08.274440  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:08.274460  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:08.344859  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:08.344874  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:08.344885  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:08.458522  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:08.458543  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:08.500670  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:08.500688  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:08.529011  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:08.529027  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:08.555699  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:08.555723  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:08.584646  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:08.584662  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:08.600417  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:08.600432  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:08.663763  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:08.663784  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:08.689700  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:08.689723  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:08.715056  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:08.715073  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:08.743131  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:08.743147  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:08.824500  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:08.824520  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:08.858045  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:08.858062  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:08.915285  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:08.915305  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:08.942496  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:08.942512  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
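The dmesg step in these cycles keeps only warning-and-above kernel messages: -H prints human-readable timestamps, -P disables the pager, -L=never disables colour, and --level selects the severities. The same filter as used here can be run standalone:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400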
	I1119 22:02:11.479016  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:11.490184  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:11.490255  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:11.516414  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:11.516425  882202 cri.go:89] found id: ""
	I1119 22:02:11.516431  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:11.516492  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.520065  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:11.520124  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:11.547622  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:11.547633  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:11.547636  882202 cri.go:89] found id: ""
	I1119 22:02:11.547643  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:11.547695  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.551407  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.555143  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:11.555203  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:11.581111  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:11.581122  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:11.581125  882202 cri.go:89] found id: ""
	I1119 22:02:11.581137  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:11.581191  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.584849  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.588250  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:11.588306  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:11.612322  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:11.612334  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:11.612337  882202 cri.go:89] found id: ""
	I1119 22:02:11.612343  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:11.612397  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.615937  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.619171  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:11.619231  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:11.648452  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:11.648463  882202 cri.go:89] found id: ""
	I1119 22:02:11.648481  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:11.648537  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.652039  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:11.652097  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:11.678292  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:11.678309  882202 cri.go:89] found id: ""
	I1119 22:02:11.678315  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:11.678368  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.681839  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:11.681913  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:11.714003  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:11.714026  882202 cri.go:89] found id: ""
	I1119 22:02:11.714033  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:11.714086  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.717614  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:11.717681  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:11.743463  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:11.743474  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:11.743477  882202 cri.go:89] found id: ""
	I1119 22:02:11.743483  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:11.743551  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.747070  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:11.750547  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:11.750561  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:11.813095  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:11.813115  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:11.847959  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:11.847977  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:11.964973  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:11.964993  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:12.005596  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:12.005632  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:12.042248  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:12.042266  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:12.077006  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:12.077022  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:12.154229  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:12.154250  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:12.179708  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:12.179724  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:12.206313  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:12.206329  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:12.240953  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:12.240970  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:12.342179  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:12.342199  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:12.419660  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:12.419669  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:12.419680  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:12.473475  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:12.473498  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:12.502845  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:12.502899  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:12.531193  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:12.531209  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:12.557023  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:12.557038  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
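The "container status" step in each cycle uses a small shell fallback: it prefers crictl when it is on the PATH and falls back to docker otherwise. A standalone equivalent of that one-liner:

    # use crictl if installed, otherwise try docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a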
	I1119 22:02:15.073436  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:15.085837  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:15.085897  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:15.114604  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:15.114615  882202 cri.go:89] found id: ""
	I1119 22:02:15.114621  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:15.114680  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.119042  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:15.119131  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:15.155109  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:15.155121  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:15.155124  882202 cri.go:89] found id: ""
	I1119 22:02:15.155130  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:15.155185  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.159049  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.162624  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:15.162689  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:15.189032  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:15.189045  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:15.189054  882202 cri.go:89] found id: ""
	I1119 22:02:15.189060  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:15.189118  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.193207  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.197191  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:15.197253  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:15.226523  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:15.226534  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:15.226544  882202 cri.go:89] found id: ""
	I1119 22:02:15.226550  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:15.226650  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.230761  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.234461  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:15.234521  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:15.260201  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:15.260212  882202 cri.go:89] found id: ""
	I1119 22:02:15.260219  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:15.260273  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.263918  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:15.263978  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:15.290439  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:15.290450  882202 cri.go:89] found id: ""
	I1119 22:02:15.290457  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:15.290516  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.294145  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:15.294202  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:15.322006  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:15.322017  882202 cri.go:89] found id: ""
	I1119 22:02:15.322030  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:15.322086  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.325782  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:15.325841  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:15.353799  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:15.353810  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:15.353813  882202 cri.go:89] found id: ""
	I1119 22:02:15.353819  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:15.353872  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.357590  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:15.361073  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:15.361087  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:15.406726  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:15.406745  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:15.434496  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:15.434513  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:15.504010  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:15.504028  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:15.539104  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:15.539121  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:15.570467  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:15.570483  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:15.601887  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:15.601903  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:15.642472  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:15.642490  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:15.672645  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:15.672662  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:15.775953  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:15.775982  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:15.894037  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:15.894061  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:15.919946  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:15.919963  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:15.947154  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:15.947170  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:16.021029  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:16.021041  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:16.021053  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:16.055457  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:16.055474  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:16.132949  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:16.132971  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:16.148446  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:16.148463  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:18.683175  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:18.694040  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:18.694099  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:18.719840  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:18.719851  882202 cri.go:89] found id: ""
	I1119 22:02:18.719858  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:18.719921  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.723568  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:18.723635  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:18.748498  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:18.748511  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:18.748514  882202 cri.go:89] found id: ""
	I1119 22:02:18.748521  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:18.748574  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.752265  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.755953  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:18.756014  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:18.782558  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:18.782569  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:18.782573  882202 cri.go:89] found id: ""
	I1119 22:02:18.782579  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:18.782634  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.787211  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.790562  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:18.790622  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:18.818082  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:18.818094  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:18.818098  882202 cri.go:89] found id: ""
	I1119 22:02:18.818104  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:18.818160  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.821834  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.825361  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:18.825421  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:18.851002  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:18.851014  882202 cri.go:89] found id: ""
	I1119 22:02:18.851020  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:18.851091  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.854630  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:18.854690  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:18.882447  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:18.882458  882202 cri.go:89] found id: ""
	I1119 22:02:18.882465  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:18.882525  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.886150  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:18.886212  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:18.912149  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:18.912160  882202 cri.go:89] found id: ""
	I1119 22:02:18.912167  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:18.912220  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.915825  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:18.915887  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:18.941991  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:18.942004  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:18.942007  882202 cri.go:89] found id: ""
	I1119 22:02:18.942013  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:18.942073  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.946017  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:18.949581  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:18.949596  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:18.974673  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:18.974690  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:19.012189  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:19.012207  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:19.051244  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:19.051264  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:19.076661  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:19.076677  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:19.154041  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:19.154059  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:19.250291  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:19.250311  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:19.266658  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:19.266676  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:19.310319  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:19.310338  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:19.337710  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:19.337725  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:19.364189  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:19.364207  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:19.479722  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:19.479742  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:19.515103  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:19.515121  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:19.543367  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:19.543388  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:19.573274  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:19.573296  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:19.640052  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:19.640062  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:19.640073  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:19.668758  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:19.668774  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:22.233793  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:22.245136  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:22.245194  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:22.273905  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:22.273916  882202 cri.go:89] found id: ""
	I1119 22:02:22.273922  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:22.273976  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.277768  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:22.277831  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:22.304542  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:22.304553  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:22.304557  882202 cri.go:89] found id: ""
	I1119 22:02:22.304563  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:22.304622  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.308334  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.311867  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:22.311938  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:22.339533  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:22.339544  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:22.339547  882202 cri.go:89] found id: ""
	I1119 22:02:22.339554  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:22.339608  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.343243  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.346772  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:22.346832  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:22.373257  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:22.373268  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:22.373271  882202 cri.go:89] found id: ""
	I1119 22:02:22.373277  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:22.373332  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.376964  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.380547  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:22.380620  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:22.407825  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:22.407836  882202 cri.go:89] found id: ""
	I1119 22:02:22.407843  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:22.407902  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.411687  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:22.411764  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:22.451211  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:22.451222  882202 cri.go:89] found id: ""
	I1119 22:02:22.451229  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:22.451286  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.454964  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:22.455031  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:22.480889  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:22.480909  882202 cri.go:89] found id: ""
	I1119 22:02:22.480916  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:22.480985  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.484574  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:22.484635  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:22.511902  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:22.511914  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:22.511917  882202 cri.go:89] found id: ""
	I1119 22:02:22.511923  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:22.511982  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.515821  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:22.519391  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:22.519405  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:22.548000  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:22.548017  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:22.575243  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:22.575262  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:22.675625  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:22.675650  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:22.704262  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:22.704279  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:22.740973  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:22.740992  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:22.823709  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:22.823732  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:22.896999  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:22.897009  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:22.897021  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:22.942546  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:22.942566  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:23.009227  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:23.009249  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:23.040729  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:23.040746  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:23.071802  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:23.071817  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:23.105664  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:23.105681  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:23.121483  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:23.121500  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:23.243394  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:23.243415  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:23.277281  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:23.277298  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:23.303915  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:23.303930  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:25.829875  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:25.840804  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:25.840863  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:25.866705  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:25.866716  882202 cri.go:89] found id: ""
	I1119 22:02:25.866723  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:25.866783  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.870677  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:25.870739  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:25.896609  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:25.896621  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:25.896625  882202 cri.go:89] found id: ""
	I1119 22:02:25.896631  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:25.896688  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.900519  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.904043  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:25.904104  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:25.929832  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:25.929845  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:25.929849  882202 cri.go:89] found id: ""
	I1119 22:02:25.929856  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:25.929928  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.933728  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.937791  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:25.937862  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:25.963957  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:25.963968  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:25.963971  882202 cri.go:89] found id: ""
	I1119 22:02:25.963977  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:25.964039  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.967677  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:25.971046  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:25.971122  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:25.996303  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:25.996314  882202 cri.go:89] found id: ""
	I1119 22:02:25.996321  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:25.996374  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:26.000183  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:26.000250  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:26.031245  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:26.031256  882202 cri.go:89] found id: ""
	I1119 22:02:26.031263  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:26.031320  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:26.035251  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:26.035321  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:26.063605  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:26.063617  882202 cri.go:89] found id: ""
	I1119 22:02:26.063631  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:26.063690  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:26.067619  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:26.067681  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:26.097001  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:26.097013  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:26.097016  882202 cri.go:89] found id: ""
	I1119 22:02:26.097023  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:26.097080  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:26.101003  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:26.104789  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:26.104804  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:26.143355  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:26.143374  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:26.245833  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:26.245856  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:26.358213  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:26.358237  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:26.383736  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:26.383753  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:26.409962  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:26.409978  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:26.436188  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:26.436207  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:26.474436  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:26.474457  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:26.515580  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:26.515597  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:26.530731  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:26.530755  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:26.597961  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:26.597971  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:26.597981  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:26.624167  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:26.624186  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:26.708327  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:26.708346  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:26.756620  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:26.756646  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:26.833977  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:26.833998  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:26.867277  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:26.867331  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:26.892979  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:26.892995  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:29.423322  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:29.434488  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:29.434545  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:29.467126  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:29.467137  882202 cri.go:89] found id: ""
	I1119 22:02:29.467144  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:29.467196  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.470858  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:29.470940  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:29.496133  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:29.496145  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:29.496148  882202 cri.go:89] found id: ""
	I1119 22:02:29.496155  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:29.496211  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.499964  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.503486  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:29.503548  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:29.529204  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:29.529218  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:29.529222  882202 cri.go:89] found id: ""
	I1119 22:02:29.529228  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:29.529292  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.532903  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.536205  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:29.536265  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:29.565549  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:29.565560  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:29.565563  882202 cri.go:89] found id: ""
	I1119 22:02:29.565569  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:29.565623  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.569342  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.572736  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:29.572794  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:29.598541  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:29.598552  882202 cri.go:89] found id: ""
	I1119 22:02:29.598559  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:29.598611  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.602124  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:29.602179  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:29.627814  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:29.627825  882202 cri.go:89] found id: ""
	I1119 22:02:29.627832  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:29.627887  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.631404  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:29.631475  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:29.660420  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:29.660432  882202 cri.go:89] found id: ""
	I1119 22:02:29.660438  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:29.660499  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.664167  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:29.664227  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:29.688930  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:29.688942  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:29.688946  882202 cri.go:89] found id: ""
	I1119 22:02:29.688952  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:29.689007  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.692749  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:29.696259  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:29.696273  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:29.727184  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:29.727202  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:29.825009  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:29.825029  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:29.939276  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:29.939296  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:30.020535  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:30.020549  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:30.020562  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:30.056323  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:30.056342  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:30.088981  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:30.088999  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:30.118738  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:30.118754  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:30.146570  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:30.146587  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:30.162409  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:30.162427  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:30.230055  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:30.230075  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:30.255605  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:30.255622  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:30.333546  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:30.333567  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:30.381905  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:30.381922  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:30.428099  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:30.428125  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:30.456844  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:30.456860  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:30.484769  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:30.484785  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:33.021553  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:33.033449  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:33.033518  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:33.061255  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:33.061266  882202 cri.go:89] found id: ""
	I1119 22:02:33.061273  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:33.061333  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.065228  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:33.065287  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:33.090655  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:33.090666  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:33.090669  882202 cri.go:89] found id: ""
	I1119 22:02:33.090675  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:33.090732  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.094458  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.098199  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:33.098261  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:33.124610  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:33.124622  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:33.124625  882202 cri.go:89] found id: ""
	I1119 22:02:33.124632  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:33.124685  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.128579  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.132045  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:33.132116  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:33.162523  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:33.162535  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:33.162538  882202 cri.go:89] found id: ""
	I1119 22:02:33.162544  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:33.162599  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.166281  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.169740  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:33.169800  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:33.196581  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:33.196592  882202 cri.go:89] found id: ""
	I1119 22:02:33.196599  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:33.196654  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.200510  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:33.200573  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:33.231574  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:33.231587  882202 cri.go:89] found id: ""
	I1119 22:02:33.231594  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:33.231652  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.235596  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:33.235659  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:33.262155  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:33.262166  882202 cri.go:89] found id: ""
	I1119 22:02:33.262173  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:33.262226  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.265864  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:33.265935  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:33.292897  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:33.292909  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:33.292912  882202 cri.go:89] found id: ""
	I1119 22:02:33.292918  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:33.292973  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.296746  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:33.300385  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:33.300399  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:33.370530  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:33.370541  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:33.370552  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:33.485394  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:33.485414  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:33.515562  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:33.515580  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:33.589810  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:33.589828  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:33.614428  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:33.614445  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:33.644407  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:33.644424  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:33.725992  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:33.726012  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:33.761817  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:33.761833  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:33.796731  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:33.796748  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:33.824429  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:33.824445  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:33.867627  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:33.867646  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:33.971624  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:33.971646  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:33.987857  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:33.987883  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:34.033310  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:34.033329  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:34.064308  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:34.064325  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:34.094149  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:34.094166  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:36.622270  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:36.633417  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:36.633475  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:36.661396  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:36.661407  882202 cri.go:89] found id: ""
	I1119 22:02:36.661414  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:36.661470  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.665353  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:36.665420  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:36.690890  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:36.690902  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:36.690905  882202 cri.go:89] found id: ""
	I1119 22:02:36.690911  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:36.690968  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.694696  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.698222  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:36.698284  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:36.725211  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:36.725223  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:36.725226  882202 cri.go:89] found id: ""
	I1119 22:02:36.725232  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:36.725286  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.728870  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.732414  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:36.732472  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:36.758281  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:36.758292  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:36.758296  882202 cri.go:89] found id: ""
	I1119 22:02:36.758301  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:36.758354  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.761987  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.765451  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:36.765519  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:36.795759  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:36.795770  882202 cri.go:89] found id: ""
	I1119 22:02:36.795776  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:36.795835  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.799641  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:36.799700  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:36.824534  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:36.824545  882202 cri.go:89] found id: ""
	I1119 22:02:36.824551  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:36.824603  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.828370  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:36.828438  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:36.855275  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:36.855287  882202 cri.go:89] found id: ""
	I1119 22:02:36.855294  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:36.855353  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.858927  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:36.858988  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:36.884269  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:36.884289  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:36.884293  882202 cri.go:89] found id: ""
	I1119 22:02:36.884299  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:36.884354  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.888139  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:36.891567  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:36.891582  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:36.916805  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:36.916822  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:36.956934  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:36.956952  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:36.983160  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:36.983176  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:37.060536  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:37.060556  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:37.127878  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:37.127899  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:37.177137  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:37.177156  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:37.208066  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:37.208090  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:37.234943  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:37.234968  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:37.281761  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:37.281780  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:37.381930  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:37.381951  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:37.398526  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:37.398547  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:37.436320  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:37.436338  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:37.466753  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:37.466769  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:37.493673  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:37.493690  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:37.572374  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:37.572383  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:37.572398  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:37.701038  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:37.701059  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:40.229698  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:40.241200  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:40.241275  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:40.273735  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:40.273746  882202 cri.go:89] found id: ""
	I1119 22:02:40.273753  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:40.273810  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.277527  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:40.277597  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:40.307870  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:40.307881  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:40.307886  882202 cri.go:89] found id: ""
	I1119 22:02:40.307892  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:40.307946  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.311831  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.315607  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:40.315668  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:40.341805  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:40.341817  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:40.341821  882202 cri.go:89] found id: ""
	I1119 22:02:40.341827  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:40.341884  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.345673  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.349338  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:40.349446  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:40.376770  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:40.376782  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:40.376785  882202 cri.go:89] found id: ""
	I1119 22:02:40.376791  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:40.376845  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.380551  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.384112  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:40.384172  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:40.410749  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:40.410760  882202 cri.go:89] found id: ""
	I1119 22:02:40.410767  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:40.410828  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.414510  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:40.414573  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:40.453206  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:40.453218  882202 cri.go:89] found id: ""
	I1119 22:02:40.453234  882202 logs.go:282] 1 containers: [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:40.453286  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.457649  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:40.457716  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:40.495820  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:40.495832  882202 cri.go:89] found id: ""
	I1119 22:02:40.495839  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:40.495892  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.500328  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:40.500406  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:40.533333  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:40.533345  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:40.533349  882202 cri.go:89] found id: ""
	I1119 22:02:40.533355  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:40.533409  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.537443  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:40.541957  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:40.541971  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:40.573372  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:40.573390  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:40.652088  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:40.652117  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:40.680175  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:40.680193  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:40.707458  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:40.707476  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:40.814778  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:40.814798  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:40.911632  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:40.911643  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:40.911664  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:41.088928  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:41.088956  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:41.140792  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:41.140814  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:41.176499  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:41.176526  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:41.217081  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:41.217097  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:41.253004  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:41.253020  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:41.304859  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:41.304876  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:41.389392  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:41.389411  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:41.424929  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:41.424946  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:41.439775  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:41.439791  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:41.482619  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:41.482639  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:44.047014  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:44.062780  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:44.062842  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:44.090526  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:44.090537  882202 cri.go:89] found id: ""
	I1119 22:02:44.090544  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:44.090599  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.094294  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:44.094361  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:44.119866  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:44.119877  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:44.119881  882202 cri.go:89] found id: ""
	I1119 22:02:44.119887  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:44.119941  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.123746  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.127286  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:44.127348  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:44.158411  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:44.158422  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:44.158425  882202 cri.go:89] found id: ""
	I1119 22:02:44.158431  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:44.158495  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.162227  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.165862  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:44.165926  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:44.192725  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:44.192737  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:44.192740  882202 cri.go:89] found id: ""
	I1119 22:02:44.192746  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:44.192801  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.196543  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.200259  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:44.200319  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:44.225464  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:44.225476  882202 cri.go:89] found id: ""
	I1119 22:02:44.225483  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:44.225538  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.229237  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:44.229296  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:44.255913  882202 cri.go:89] found id: "610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:44.255925  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:44.255929  882202 cri.go:89] found id: ""
	I1119 22:02:44.255936  882202 logs.go:282] 2 containers: [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:44.255995  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.259695  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.263414  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:44.263480  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:44.290143  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:44.290155  882202 cri.go:89] found id: ""
	I1119 22:02:44.290162  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:44.290231  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.293786  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:44.293854  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:44.324148  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:44.324159  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:44.324163  882202 cri.go:89] found id: ""
	I1119 22:02:44.324170  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:44.324226  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.328016  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:44.331453  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:44.331467  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:44.356420  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:44.356436  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:44.390083  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:44.390101  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:44.405470  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:44.405487  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:44.525734  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:44.525756  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:44.575118  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:44.575137  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:44.603843  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:44.603859  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:44.630560  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:44.630577  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:44.736766  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:44.736788  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:44.805357  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:44.805370  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:44.805381  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:44.885207  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:44.885228  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:44.915401  882202 logs.go:123] Gathering logs for kube-controller-manager [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5] ...
	I1119 22:02:44.915419  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:44.941464  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:44.941484  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:44.968265  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:44.968282  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:45.023751  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:45.023777  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:45.078474  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:45.078500  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:45.138786  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:45.138810  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:45.212512  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:45.212533  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:47.802989  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:47.813947  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:47.814004  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:47.839718  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:47.839729  882202 cri.go:89] found id: ""
	I1119 22:02:47.839735  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:47.839792  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.843725  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:47.843791  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:47.869631  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:47.869642  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:47.869646  882202 cri.go:89] found id: ""
	I1119 22:02:47.869652  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:47.869727  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.873582  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.877009  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:47.877070  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:47.905580  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:47.905592  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:47.905594  882202 cri.go:89] found id: ""
	I1119 22:02:47.905601  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:47.905657  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.909537  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.913182  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:47.913244  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:47.939136  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:47.939148  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:47.939151  882202 cri.go:89] found id: ""
	I1119 22:02:47.939158  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:47.939217  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.943043  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.946498  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:47.946559  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:47.973269  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:47.973290  882202 cri.go:89] found id: ""
	I1119 22:02:47.973299  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:47.973354  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:47.976934  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:47.977007  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:48.006125  882202 cri.go:89] found id: "610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:48.006137  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:48.006140  882202 cri.go:89] found id: ""
	I1119 22:02:48.006148  882202 logs.go:282] 2 containers: [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:48.006220  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:48.011613  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:48.015727  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:48.015795  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:48.045917  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:48.045929  882202 cri.go:89] found id: ""
	I1119 22:02:48.045936  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:48.045997  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:48.049962  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:48.050032  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:48.084743  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:48.084755  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:48.084759  882202 cri.go:89] found id: ""
	I1119 22:02:48.084766  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:48.084825  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:48.088687  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:48.092498  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:48.092515  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:48.123895  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:48.123911  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:48.151522  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:48.151538  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:48.249226  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:48.249247  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:48.277005  882202 logs.go:123] Gathering logs for kube-controller-manager [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5] ...
	I1119 22:02:48.277024  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:48.304004  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:48.304020  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:48.319751  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:48.319769  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:48.347040  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:48.347059  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:48.417293  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:48.417303  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:48.417314  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:48.464990  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:48.465008  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:48.527085  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:48.527107  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:48.600331  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:48.600354  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:48.628615  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:48.628639  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:48.661961  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:48.661978  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:48.776647  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:48.776667  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:48.806286  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:48.806302  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:48.844767  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:48.844786  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:48.871265  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:48.871281  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:51.448765  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:51.460375  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:51.460438  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:51.491341  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:51.491352  882202 cri.go:89] found id: ""
	I1119 22:02:51.491359  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:51.491413  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.495007  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:51.495063  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:51.520966  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:51.520977  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:51.520981  882202 cri.go:89] found id: ""
	I1119 22:02:51.520987  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:51.521054  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.525081  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.528396  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:51.528455  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:51.553771  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:51.553782  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:51.553796  882202 cri.go:89] found id: ""
	I1119 22:02:51.553803  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:51.553870  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.557907  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.561355  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:51.561422  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:51.588157  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:51.588169  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:51.588173  882202 cri.go:89] found id: ""
	I1119 22:02:51.588179  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:51.588237  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.592080  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.595622  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:51.595678  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:51.625661  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:51.625685  882202 cri.go:89] found id: ""
	I1119 22:02:51.625692  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:51.625761  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.629343  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:51.629401  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:51.655395  882202 cri.go:89] found id: "610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:51.655407  882202 cri.go:89] found id: "efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:51.655410  882202 cri.go:89] found id: ""
	I1119 22:02:51.655416  882202 logs.go:282] 2 containers: [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712]
	I1119 22:02:51.655472  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.659280  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.662713  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:51.662774  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:51.688649  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:51.688661  882202 cri.go:89] found id: ""
	I1119 22:02:51.688667  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:51.688728  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.693030  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:51.693096  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:51.721702  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:51.721714  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:51.721718  882202 cri.go:89] found id: ""
	I1119 22:02:51.721725  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:51.721780  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.725508  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:51.729115  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:51.729131  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:51.776279  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:51.776298  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:51.858066  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:51.858086  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:51.889774  882202 logs.go:123] Gathering logs for kube-controller-manager [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5] ...
	I1119 22:02:51.889792  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:51.915195  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:51.915214  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:51.951870  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:51.951889  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:51.980765  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:51.980782  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:52.082661  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:52.082682  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:52.098177  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:52.098196  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:52.171855  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:52.171863  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:52.171873  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:52.197158  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:52.197174  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:52.226751  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:52.226771  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:52.253512  882202 logs.go:123] Gathering logs for kube-controller-manager [efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712] ...
	I1119 22:02:52.253529  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 efda1985471417518e502fe74bf3097486847281b2b97b8ba2ad4f72cea6f712"
	I1119 22:02:52.280464  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:52.280482  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:52.305849  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:52.305866  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:52.434007  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:52.434029  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:52.515636  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:52.515655  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:52.555703  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:52.555721  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:55.112748  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:55.123736  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:55.123794  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:55.154523  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:55.154535  882202 cri.go:89] found id: ""
	I1119 22:02:55.154542  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:55.154600  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.158518  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:55.158595  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:55.186587  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:55.186598  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:55.186603  882202 cri.go:89] found id: ""
	I1119 22:02:55.186610  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:55.186664  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.190417  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.194321  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:55.194392  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:55.220691  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:55.220703  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:55.220706  882202 cri.go:89] found id: ""
	I1119 22:02:55.220713  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:55.220783  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.224537  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.228081  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:55.228144  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:55.252456  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:55.252468  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:55.252471  882202 cri.go:89] found id: ""
	I1119 22:02:55.252478  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:55.252531  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.256199  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.259612  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:55.259671  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:55.285488  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:55.285500  882202 cri.go:89] found id: ""
	I1119 22:02:55.285506  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:55.285571  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.289338  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:55.289397  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:55.316612  882202 cri.go:89] found id: "610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:55.316624  882202 cri.go:89] found id: ""
	I1119 22:02:55.316630  882202 logs.go:282] 1 containers: [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5]
	I1119 22:02:55.316684  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.320481  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:55.320557  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:55.346078  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:55.346089  882202 cri.go:89] found id: ""
	I1119 22:02:55.346096  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:55.346159  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.349893  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:55.349951  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:55.375622  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:55.375633  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:55.375637  882202 cri.go:89] found id: ""
	I1119 22:02:55.375652  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:55.375707  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.379355  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:55.383754  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:55.383768  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:55.480477  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:55.480498  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:55.593716  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:55.593736  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:55.662760  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:55.662781  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:55.677796  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:55.677819  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:55.724780  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:55.724798  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:55.752488  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:55.752504  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:55.777478  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:55.777493  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:55.846135  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:55.846144  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:55.846155  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:55.872087  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:55.872104  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:55.898156  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:55.898171  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:55.973849  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:55.973867  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:56.012760  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:56.012780  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:56.046248  882202 logs.go:123] Gathering logs for kube-controller-manager [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5] ...
	I1119 22:02:56.046266  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:56.073823  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:56.073840  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:56.100451  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:56.100468  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:56.138140  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:56.138158  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:02:58.670930  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:02:58.682061  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:02:58.682121  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:02:58.708426  882202 cri.go:89] found id: "624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:58.708437  882202 cri.go:89] found id: ""
	I1119 22:02:58.708443  882202 logs.go:282] 1 containers: [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc]
	I1119 22:02:58.708503  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.712290  882202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:02:58.712350  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:02:58.743530  882202 cri.go:89] found id: "79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:58.743542  882202 cri.go:89] found id: "fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:58.743545  882202 cri.go:89] found id: ""
	I1119 22:02:58.743552  882202 logs.go:282] 2 containers: [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3]
	I1119 22:02:58.743610  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.747505  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.751098  882202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:02:58.751167  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:02:58.777386  882202 cri.go:89] found id: "41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:58.777397  882202 cri.go:89] found id: "73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:58.777400  882202 cri.go:89] found id: ""
	I1119 22:02:58.777406  882202 logs.go:282] 2 containers: [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604]
	I1119 22:02:58.777464  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.781199  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.784629  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:02:58.784689  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:02:58.810913  882202 cri.go:89] found id: "60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:58.810925  882202 cri.go:89] found id: "6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:58.810928  882202 cri.go:89] found id: ""
	I1119 22:02:58.810935  882202 logs.go:282] 2 containers: [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5]
	I1119 22:02:58.810992  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.814629  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.817932  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:02:58.818033  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:02:58.844563  882202 cri.go:89] found id: "603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:58.844580  882202 cri.go:89] found id: ""
	I1119 22:02:58.844587  882202 logs.go:282] 1 containers: [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238]
	I1119 22:02:58.844642  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.848459  882202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:02:58.848532  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:02:58.878991  882202 cri.go:89] found id: "610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:58.879003  882202 cri.go:89] found id: ""
	I1119 22:02:58.879009  882202 logs.go:282] 1 containers: [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5]
	I1119 22:02:58.879070  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.882747  882202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:02:58.882807  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:02:58.910005  882202 cri.go:89] found id: "3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:58.910017  882202 cri.go:89] found id: ""
	I1119 22:02:58.910023  882202 logs.go:282] 1 containers: [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44]
	I1119 22:02:58.910083  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.913818  882202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:02:58.913879  882202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:02:58.942946  882202 cri.go:89] found id: "1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:58.942957  882202 cri.go:89] found id: "d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:58.942960  882202 cri.go:89] found id: ""
	I1119 22:02:58.942966  882202 logs.go:282] 2 containers: [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2]
	I1119 22:02:58.943030  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.946893  882202 ssh_runner.go:195] Run: which crictl
	I1119 22:02:58.950448  882202 logs.go:123] Gathering logs for kube-apiserver [624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc] ...
	I1119 22:02:58.950463  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624b3f861ed032622d12b39847c618ea47be59e414cf80efd937dba90b9239fc"
	I1119 22:02:59.064522  882202 logs.go:123] Gathering logs for etcd [79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3] ...
	I1119 22:02:59.064545  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 79ab34983aa51908b6051a57913b7da8547f895807a364c1f9c47c0495f42dc3"
	I1119 22:02:59.099828  882202 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:02:59.099850  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:02:59.180831  882202 logs.go:123] Gathering logs for kubelet ...
	I1119 22:02:59.180851  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:02:59.286885  882202 logs.go:123] Gathering logs for dmesg ...
	I1119 22:02:59.286906  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:02:59.302694  882202 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:02:59.302710  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:02:59.372946  882202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:02:59.372955  882202 logs.go:123] Gathering logs for etcd [fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3] ...
	I1119 22:02:59.372971  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fa220941986cfe64c545e8d75c939f6b906a022818f9452672a21630e59099b3"
	I1119 22:02:59.427362  882202 logs.go:123] Gathering logs for kube-proxy [603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238] ...
	I1119 22:02:59.427381  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 603a88e453e7cdcc2e8fe8daf85c8d5a2b3c00c964dd3561461b60b43f1dc238"
	I1119 22:02:59.461049  882202 logs.go:123] Gathering logs for kindnet [3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44] ...
	I1119 22:02:59.461065  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3fb282f49203bb89f7ff3ef2f4ae5c161bb70252b4f45e95429e9370ea46fb44"
	I1119 22:02:59.486449  882202 logs.go:123] Gathering logs for coredns [41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e] ...
	I1119 22:02:59.486465  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 41bf09d4e1ad746bc0b5028f0265968e13a3d6d96ea9dc30b43fa5367cfbe07e"
	I1119 22:02:59.512146  882202 logs.go:123] Gathering logs for kube-scheduler [60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995] ...
	I1119 22:02:59.512163  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 60dfa2c9b93f9a3df57cbd774cc0bacfa1cc274aac2ac020ce1180c88edee995"
	I1119 22:02:59.579117  882202 logs.go:123] Gathering logs for kube-scheduler [6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5] ...
	I1119 22:02:59.579138  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6a8b535fec0b9960fbe3dcd9a2844a896fd7c5f873053b31a0c32831ce81ecd5"
	I1119 22:02:59.605225  882202 logs.go:123] Gathering logs for storage-provisioner [1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b] ...
	I1119 22:02:59.605242  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1bac5f8aa391613b726524344195384dddd1ee9f356b107396e268c8216fd18b"
	I1119 22:02:59.645053  882202 logs.go:123] Gathering logs for coredns [73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604] ...
	I1119 22:02:59.645076  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 73c21996e9a6f14349d77cd343a6a925c03958d141440b0277716f39bee81604"
	I1119 22:02:59.676169  882202 logs.go:123] Gathering logs for kube-controller-manager [610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5] ...
	I1119 22:02:59.676185  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 610e5cf9d10a147e45848624a56322a93105b66930ef4a82f022bc006f04dab5"
	I1119 22:02:59.702717  882202 logs.go:123] Gathering logs for storage-provisioner [d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2] ...
	I1119 22:02:59.702733  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8860842296fb0ae11fab15bae096f047c6abcbccf4a0fbd8a6a7a2ca48391a2"
	I1119 22:02:59.732179  882202 logs.go:123] Gathering logs for container status ...
	I1119 22:02:59.732194  882202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:03:02.274396  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:03:02.286259  882202 kubeadm.go:602] duration metric: took 4m29.820036388s to restartPrimaryControlPlane
	W1119 22:03:02.286316  882202 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1119 22:03:02.286400  882202 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1119 22:03:04.321504  882202 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.035082402s)
	I1119 22:03:04.321566  882202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:03:04.335051  882202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:03:04.342978  882202 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:03:04.343038  882202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:03:04.350820  882202 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:03:04.350830  882202 kubeadm.go:158] found existing configuration files:
	
	I1119 22:03:04.350907  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1119 22:03:04.358598  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:03:04.358655  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:03:04.366236  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1119 22:03:04.374016  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:03:04.374078  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:03:04.381524  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1119 22:03:04.389019  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:03:04.389076  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:03:04.396656  882202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1119 22:03:04.404544  882202 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:03:04.404597  882202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
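The config check above probes each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply missing, so each grep exits with status 2 and the rm is a no-op). A condensed shell sketch of the same cleanup, with the endpoint taken from the log:

	# remove a kubeconfig that no longer references the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done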
	I1119 22:03:04.411716  882202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:03:04.449622  882202 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:03:04.449869  882202 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:03:04.471444  882202 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:03:04.471512  882202 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:03:04.471548  882202 kubeadm.go:319] OS: Linux
	I1119 22:03:04.471603  882202 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:03:04.471654  882202 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:03:04.471703  882202 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:03:04.471752  882202 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:03:04.471801  882202 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:03:04.471850  882202 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:03:04.471895  882202 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:03:04.471945  882202 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:03:04.471992  882202 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:03:04.543959  882202 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:03:04.544067  882202 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:03:04.544162  882202 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:03:04.555306  882202 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:03:04.563020  882202 out.go:252]   - Generating certificates and keys ...
	I1119 22:03:04.563111  882202 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:03:04.563177  882202 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:03:04.563264  882202 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1119 22:03:04.563328  882202 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1119 22:03:04.563400  882202 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1119 22:03:04.563465  882202 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1119 22:03:04.563536  882202 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1119 22:03:04.563607  882202 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1119 22:03:04.563699  882202 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1119 22:03:04.563775  882202 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1119 22:03:04.563818  882202 kubeadm.go:319] [certs] Using the existing "sa" key
	I1119 22:03:04.563877  882202 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:03:04.850204  882202 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:03:05.086457  882202 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:03:05.505278  882202 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:03:06.206049  882202 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:03:06.713775  882202 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:03:06.714324  882202 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:03:06.717007  882202 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:03:06.720088  882202 out.go:252]   - Booting up control plane ...
	I1119 22:03:06.720192  882202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:03:06.720284  882202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:03:06.720356  882202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:03:06.735454  882202 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:03:06.735586  882202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:03:06.743330  882202 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:03:06.743646  882202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:03:06.743766  882202 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:03:06.879384  882202 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:03:06.879502  882202 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:03:08.378567  882202 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50086341s
	I1119 22:03:08.382232  882202 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:03:08.382325  882202 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1119 22:03:08.382416  882202 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:03:08.382497  882202 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:03:11.919748  882202 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.536500272s
	I1119 22:03:13.374363  882202 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.992154721s
	I1119 22:03:14.384458  882202 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001547158s
	I1119 22:03:14.405195  882202 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:03:14.422589  882202 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:03:14.439931  882202 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:03:14.440143  882202 kubeadm.go:319] [mark-control-plane] Marking the node functional-642533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:03:14.460813  882202 kubeadm.go:319] [bootstrap-token] Using token: l706s5.87twp216t3bdhmsl
	I1119 22:03:14.463686  882202 out.go:252]   - Configuring RBAC rules ...
	I1119 22:03:14.463806  882202 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:03:14.471254  882202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:03:14.480642  882202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:03:14.485937  882202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:03:14.492826  882202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:03:14.496904  882202 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:03:14.791288  882202 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:03:15.231802  882202 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:03:15.793160  882202 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:03:15.793172  882202 kubeadm.go:319] 
	I1119 22:03:15.793242  882202 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:03:15.793246  882202 kubeadm.go:319] 
	I1119 22:03:15.793351  882202 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:03:15.793361  882202 kubeadm.go:319] 
	I1119 22:03:15.793393  882202 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:03:15.793460  882202 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:03:15.793525  882202 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:03:15.793529  882202 kubeadm.go:319] 
	I1119 22:03:15.793595  882202 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:03:15.793598  882202 kubeadm.go:319] 
	I1119 22:03:15.793676  882202 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:03:15.793680  882202 kubeadm.go:319] 
	I1119 22:03:15.793754  882202 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:03:15.793849  882202 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:03:15.793930  882202 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:03:15.793933  882202 kubeadm.go:319] 
	I1119 22:03:15.794034  882202 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:03:15.794120  882202 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:03:15.794123  882202 kubeadm.go:319] 
	I1119 22:03:15.794226  882202 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8441 --token l706s5.87twp216t3bdhmsl \
	I1119 22:03:15.794364  882202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:03:15.794387  882202 kubeadm.go:319] 	--control-plane 
	I1119 22:03:15.794391  882202 kubeadm.go:319] 
	I1119 22:03:15.794486  882202 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:03:15.794489  882202 kubeadm.go:319] 
	I1119 22:03:15.794585  882202 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8441 --token l706s5.87twp216t3bdhmsl \
	I1119 22:03:15.794707  882202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:03:15.797803  882202 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:03:15.798048  882202 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:03:15.798166  882202 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:03:15.798183  882202 cni.go:84] Creating CNI manager for ""
	I1119 22:03:15.798189  882202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:03:15.803420  882202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:03:15.806210  882202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:03:15.810610  882202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:03:15.810621  882202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:03:15.824546  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
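Done by hand from inside the node, the CNI step above reduces to checking that the portmap plugin exists and applying the generated kindnet manifest; paths and kubectl flags as shown in the log:

	# verify the portmap CNI plugin is present, then apply the generated manifest
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml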
	I1119 22:03:16.083643  882202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:03:16.083773  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:16.083858  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes functional-642533 minikube.k8s.io/updated_at=2025_11_19T22_03_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=functional-642533 minikube.k8s.io/primary=true
	I1119 22:03:16.226989  882202 ops.go:34] apiserver oom_adj: -16
	I1119 22:03:16.227097  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:16.728081  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:17.228100  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:17.727379  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:18.227380  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:18.728034  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:19.227267  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:19.727992  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:20.227600  882202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:03:20.386510  882202 kubeadm.go:1114] duration metric: took 4.302786345s to wait for elevateKubeSystemPrivileges
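The half-second cadence above is the runner waiting for the default service account to exist while it grants cluster-admin to kube-system:default. A sequential shell sketch of those two steps, with the interval and paths as seen in the log (in the run itself the role binding is created concurrently with the polling):

	# wait until the default service account exists, then bind cluster-admin to kube-system:default
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig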
	I1119 22:03:20.386527  882202 kubeadm.go:403] duration metric: took 4m48.060017151s to StartCluster
	I1119 22:03:20.386542  882202 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:03:20.386603  882202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:03:20.387271  882202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:03:20.387470  882202 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:03:20.387732  882202 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:03:20.387772  882202 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:03:20.387832  882202 addons.go:70] Setting storage-provisioner=true in profile "functional-642533"
	I1119 22:03:20.387843  882202 addons.go:239] Setting addon storage-provisioner=true in "functional-642533"
	W1119 22:03:20.387848  882202 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:03:20.387870  882202 host.go:66] Checking if "functional-642533" exists ...
	I1119 22:03:20.388311  882202 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
	I1119 22:03:20.388708  882202 addons.go:70] Setting default-storageclass=true in profile "functional-642533"
	I1119 22:03:20.388723  882202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-642533"
	I1119 22:03:20.388996  882202 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
	I1119 22:03:20.391166  882202 out.go:179] * Verifying Kubernetes components...
	I1119 22:03:20.394348  882202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:03:20.436746  882202 addons.go:239] Setting addon default-storageclass=true in "functional-642533"
	W1119 22:03:20.436757  882202 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:03:20.436781  882202 host.go:66] Checking if "functional-642533" exists ...
	I1119 22:03:20.437185  882202 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
	I1119 22:03:20.439734  882202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:03:20.446562  882202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:03:20.446573  882202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:03:20.446636  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 22:03:20.465149  882202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:03:20.465177  882202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:03:20.465239  882202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 22:03:20.485562  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 22:03:20.507113  882202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 22:03:20.717200  882202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:03:20.748123  882202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:03:20.771354  882202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:03:21.468233  882202 node_ready.go:35] waiting up to 6m0s for node "functional-642533" to be "Ready" ...
	I1119 22:03:21.496529  882202 node_ready.go:49] node "functional-642533" is "Ready"
	I1119 22:03:21.496543  882202 node_ready.go:38] duration metric: took 27.725247ms for node "functional-642533" to be "Ready" ...
	I1119 22:03:21.496554  882202 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:03:21.496623  882202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:03:21.503461  882202 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:03:21.506614  882202 addons.go:515] duration metric: took 1.118822279s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:03:21.517959  882202 api_server.go:72] duration metric: took 1.130461122s to wait for apiserver process to appear ...
	I1119 22:03:21.517973  882202 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:03:21.518002  882202 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 22:03:21.527565  882202 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1119 22:03:21.528649  882202 api_server.go:141] control plane version: v1.34.1
	I1119 22:03:21.528678  882202 api_server.go:131] duration metric: took 10.698608ms to wait for apiserver health ...
	I1119 22:03:21.528686  882202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:03:21.532666  882202 system_pods.go:59] 9 kube-system pods found
	I1119 22:03:21.532698  882202 system_pods.go:61] "coredns-66bc5c9577-27dgj" [318d3db3-4010-4fb8-a9d4-5a0a185f43e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:03:21.532704  882202 system_pods.go:61] "coredns-66bc5c9577-9qtgt" [96f32484-c314-44ad-8509-59e708c06247] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:03:21.532713  882202 system_pods.go:61] "etcd-functional-642533" [775e3397-a5b2-4427-b3ac-4cb3325267de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:03:21.532718  882202 system_pods.go:61] "kindnet-wxxr9" [b88416ec-c0d6-421c-9a7b-ae3e00518f3a] Running
	I1119 22:03:21.532725  882202 system_pods.go:61] "kube-apiserver-functional-642533" [81afa158-d652-49c6-b69f-ea029a74d07a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:03:21.532729  882202 system_pods.go:61] "kube-controller-manager-functional-642533" [445e90d7-da9e-471d-b34d-3c570a6aa512] Running
	I1119 22:03:21.532733  882202 system_pods.go:61] "kube-proxy-x4x5p" [bcc51b44-23bb-410f-9006-6878661f412b] Running
	I1119 22:03:21.532738  882202 system_pods.go:61] "kube-scheduler-functional-642533" [937ab3b1-088d-42e7-847d-fbf4237b48bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:03:21.532743  882202 system_pods.go:61] "storage-provisioner" [99d5d456-455a-46aa-96b0-ce59fcfcd1e7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:03:21.532747  882202 system_pods.go:74] duration metric: took 4.057154ms to wait for pod list to return data ...
	I1119 22:03:21.532765  882202 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:03:21.535683  882202 default_sa.go:45] found service account: "default"
	I1119 22:03:21.535696  882202 default_sa.go:55] duration metric: took 2.926393ms for default service account to be created ...
	I1119 22:03:21.535709  882202 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:03:21.539033  882202 system_pods.go:86] 9 kube-system pods found
	I1119 22:03:21.539056  882202 system_pods.go:89] "coredns-66bc5c9577-27dgj" [318d3db3-4010-4fb8-a9d4-5a0a185f43e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:03:21.539071  882202 system_pods.go:89] "coredns-66bc5c9577-9qtgt" [96f32484-c314-44ad-8509-59e708c06247] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:03:21.539089  882202 system_pods.go:89] "etcd-functional-642533" [775e3397-a5b2-4427-b3ac-4cb3325267de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:03:21.539097  882202 system_pods.go:89] "kindnet-wxxr9" [b88416ec-c0d6-421c-9a7b-ae3e00518f3a] Running
	I1119 22:03:21.539105  882202 system_pods.go:89] "kube-apiserver-functional-642533" [81afa158-d652-49c6-b69f-ea029a74d07a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:03:21.539109  882202 system_pods.go:89] "kube-controller-manager-functional-642533" [445e90d7-da9e-471d-b34d-3c570a6aa512] Running
	I1119 22:03:21.539113  882202 system_pods.go:89] "kube-proxy-x4x5p" [bcc51b44-23bb-410f-9006-6878661f412b] Running
	I1119 22:03:21.539123  882202 system_pods.go:89] "kube-scheduler-functional-642533" [937ab3b1-088d-42e7-847d-fbf4237b48bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:03:21.539128  882202 system_pods.go:89] "storage-provisioner" [99d5d456-455a-46aa-96b0-ce59fcfcd1e7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:03:21.539141  882202 system_pods.go:126] duration metric: took 3.426591ms to wait for k8s-apps to be running ...
	I1119 22:03:21.539161  882202 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:03:21.539218  882202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:03:21.561532  882202 system_svc.go:56] duration metric: took 22.371874ms WaitForService to wait for kubelet
	I1119 22:03:21.561550  882202 kubeadm.go:587] duration metric: took 1.174060356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:03:21.561570  882202 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:03:21.564871  882202 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:03:21.564895  882202 node_conditions.go:123] node cpu capacity is 2
	I1119 22:03:21.564915  882202 node_conditions.go:105] duration metric: took 3.340452ms to run NodePressure ...
	I1119 22:03:21.564926  882202 start.go:242] waiting for startup goroutines ...
	I1119 22:03:21.564933  882202 start.go:247] waiting for cluster config update ...
	I1119 22:03:21.564942  882202 start.go:256] writing updated cluster config ...
	I1119 22:03:21.565287  882202 ssh_runner.go:195] Run: rm -f paused
	I1119 22:03:21.569260  882202 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:03:21.573265  882202 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-27dgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:22.079264  882202 pod_ready.go:94] pod "coredns-66bc5c9577-27dgj" is "Ready"
	I1119 22:03:22.079280  882202 pod_ready.go:86] duration metric: took 506.002895ms for pod "coredns-66bc5c9577-27dgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:22.079287  882202 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9qtgt" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:03:24.085188  882202 pod_ready.go:104] pod "coredns-66bc5c9577-9qtgt" is not "Ready", error: <nil>
	W1119 22:03:26.085368  882202 pod_ready.go:104] pod "coredns-66bc5c9577-9qtgt" is not "Ready", error: <nil>
	W1119 22:03:28.086046  882202 pod_ready.go:104] pod "coredns-66bc5c9577-9qtgt" is not "Ready", error: <nil>
	W1119 22:03:30.584467  882202 pod_ready.go:104] pod "coredns-66bc5c9577-9qtgt" is not "Ready", error: <nil>
	I1119 22:03:31.084590  882202 pod_ready.go:94] pod "coredns-66bc5c9577-9qtgt" is "Ready"
	I1119 22:03:31.084606  882202 pod_ready.go:86] duration metric: took 9.005313135s for pod "coredns-66bc5c9577-9qtgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.087378  882202 pod_ready.go:83] waiting for pod "etcd-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.092441  882202 pod_ready.go:94] pod "etcd-functional-642533" is "Ready"
	I1119 22:03:31.092455  882202 pod_ready.go:86] duration metric: took 5.064025ms for pod "etcd-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.094742  882202 pod_ready.go:83] waiting for pod "kube-apiserver-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.099417  882202 pod_ready.go:94] pod "kube-apiserver-functional-642533" is "Ready"
	I1119 22:03:31.099431  882202 pod_ready.go:86] duration metric: took 4.675515ms for pod "kube-apiserver-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.101804  882202 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.283328  882202 pod_ready.go:94] pod "kube-controller-manager-functional-642533" is "Ready"
	I1119 22:03:31.283343  882202 pod_ready.go:86] duration metric: took 181.52662ms for pod "kube-controller-manager-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.483541  882202 pod_ready.go:83] waiting for pod "kube-proxy-x4x5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:31.882750  882202 pod_ready.go:94] pod "kube-proxy-x4x5p" is "Ready"
	I1119 22:03:31.882764  882202 pod_ready.go:86] duration metric: took 399.210461ms for pod "kube-proxy-x4x5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:32.083438  882202 pod_ready.go:83] waiting for pod "kube-scheduler-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:32.483088  882202 pod_ready.go:94] pod "kube-scheduler-functional-642533" is "Ready"
	I1119 22:03:32.483102  882202 pod_ready.go:86] duration metric: took 399.64802ms for pod "kube-scheduler-functional-642533" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:03:32.483113  882202 pod_ready.go:40] duration metric: took 10.91381957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:03:32.534262  882202 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:03:32.537415  882202 out.go:179] * Done! kubectl is now configured to use "functional-642533" cluster and "default" namespace by default
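For readers following the readiness gates in the start log above: the apiserver check recorded at 22:03:21 is an HTTP GET against /healthz on https://192.168.49.2:8441 that expects a 200 response with the body "ok". Below is a minimal Go sketch of that probe, assuming only the endpoint and expected response shown in the log; the TLS handling (skipping certificate verification) and the 10s timeout are illustrative, since minikube's real check authenticates with the cluster's own credentials.

// healthz_probe.go - illustrative sketch of the apiserver health probe
// logged above ("Checking apiserver healthz at https://192.168.49.2:8441/healthz").
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip cert verification instead of loading cluster certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok",
	// matching the "returned 200: ok" lines in the log above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}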
	
	
	==> CRI-O <==
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.163337122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.173530214Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-v2sld Namespace:default ID:44df386cdfb5b19c88c03156d231b3145eb8fab39ca185af223fd4165da3be32 UID:8c450599-184a-4a26-8c2a-d5a4eae88302 NetNS:/var/run/netns/5334daef-a50f-4e7e-943f-14bce832f3d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078200}] Aliases:map[]}"
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.173570789Z" level=info msg="Adding pod default_hello-node-75c85bcc94-v2sld to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.187096456Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-v2sld Namespace:default ID:44df386cdfb5b19c88c03156d231b3145eb8fab39ca185af223fd4165da3be32 UID:8c450599-184a-4a26-8c2a-d5a4eae88302 NetNS:/var/run/netns/5334daef-a50f-4e7e-943f-14bce832f3d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078200}] Aliases:map[]}"
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.187253816Z" level=info msg="Checking pod default_hello-node-75c85bcc94-v2sld for CNI network kindnet (type=ptp)"
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.190222547Z" level=info msg="Ran pod sandbox 44df386cdfb5b19c88c03156d231b3145eb8fab39ca185af223fd4165da3be32 with infra container: default/hello-node-75c85bcc94-v2sld/POD" id=a926f769-30fe-49ca-a72b-89bc47008fc8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.194327594Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d24a694c-ebec-48b0-b2c3-8f6ea948ce06 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:04:05 functional-642533 crio[3531]: time="2025-11-19T22:04:05.205028155Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7fd66cc9-a2f5-4d0c-8054-4fa19fd22aa8 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.228264334Z" level=info msg="Stopping pod sandbox: 5c8c6854d0a219eb63863b6ec58be09d4e1ff766a8ab3827be8012f321eecbfc" id=572a5671-1915-4ae1-8f4e-25b76e54f598 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.228322066Z" level=info msg="Stopped pod sandbox (already stopped): 5c8c6854d0a219eb63863b6ec58be09d4e1ff766a8ab3827be8012f321eecbfc" id=572a5671-1915-4ae1-8f4e-25b76e54f598 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.228928662Z" level=info msg="Removing pod sandbox: 5c8c6854d0a219eb63863b6ec58be09d4e1ff766a8ab3827be8012f321eecbfc" id=0b5e8abe-636b-4c2a-a9ab-f964a56d4401 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.23266463Z" level=info msg="Removed pod sandbox: 5c8c6854d0a219eb63863b6ec58be09d4e1ff766a8ab3827be8012f321eecbfc" id=0b5e8abe-636b-4c2a-a9ab-f964a56d4401 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.233164976Z" level=info msg="Stopping pod sandbox: 28cd0c9d5554bbc1d82afb3cdfec4c319696f9c58fe8bbad80e6d79cd0bc55e5" id=c7dc55dd-895d-4089-9d31-24956d69bc7e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.233210466Z" level=info msg="Stopped pod sandbox (already stopped): 28cd0c9d5554bbc1d82afb3cdfec4c319696f9c58fe8bbad80e6d79cd0bc55e5" id=c7dc55dd-895d-4089-9d31-24956d69bc7e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.233522018Z" level=info msg="Removing pod sandbox: 28cd0c9d5554bbc1d82afb3cdfec4c319696f9c58fe8bbad80e6d79cd0bc55e5" id=4cdcca10-18a3-4043-96ca-bd34f71c344f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 22:04:15 functional-642533 crio[3531]: time="2025-11-19T22:04:15.237019576Z" level=info msg="Removed pod sandbox: 28cd0c9d5554bbc1d82afb3cdfec4c319696f9c58fe8bbad80e6d79cd0bc55e5" id=4cdcca10-18a3-4043-96ca-bd34f71c344f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 22:04:17 functional-642533 crio[3531]: time="2025-11-19T22:04:17.204424595Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bf01b7e8-bb33-44ca-82df-64a69b6fb7f3 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:04:30 functional-642533 crio[3531]: time="2025-11-19T22:04:30.203664421Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f3f12e42-e119-468e-b0d3-cca6fda9d35f name=/runtime.v1.ImageService/PullImage
	Nov 19 22:04:40 functional-642533 crio[3531]: time="2025-11-19T22:04:40.204069264Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4cc77b56-7fdd-41c3-bf1a-f5ebc7e449f1 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:05:13 functional-642533 crio[3531]: time="2025-11-19T22:05:13.203436479Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=20c4e676-0f0c-437e-b86e-a6acf4d9dd9c name=/runtime.v1.ImageService/PullImage
	Nov 19 22:05:31 functional-642533 crio[3531]: time="2025-11-19T22:05:31.205625409Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ea8cb73-7feb-4afa-a662-0cc14d2c8e49 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:06:39 functional-642533 crio[3531]: time="2025-11-19T22:06:39.203199635Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=94b99e91-fc61-4d32-b58d-2e141eaaefea name=/runtime.v1.ImageService/PullImage
	Nov 19 22:06:59 functional-642533 crio[3531]: time="2025-11-19T22:06:59.203739996Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1f9a2e21-85d1-4147-ba9a-095a4f78ed88 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:09:28 functional-642533 crio[3531]: time="2025-11-19T22:09:28.203911972Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e442b119-aa29-41c2-957a-5724a27dbd3a name=/runtime.v1.ImageService/PullImage
	Nov 19 22:09:41 functional-642533 crio[3531]: time="2025-11-19T22:09:41.204142287Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0870158f-797f-407f-96a3-5c9108d699ac name=/runtime.v1.ImageService/PullImage
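The repeated "Pulling image: kicbase/echo-server:latest" entries above line up with the kubelet errors later in this log: CRI-O's short-name mode is enforcing, so the unqualified name kicbase/echo-server:latest is rejected as ambiguous instead of being silently resolved to docker.io. The Go sketch below shows the Docker-style normalization that would otherwise qualify the short name; the github.com/docker/distribution/reference import is used purely for illustration and is not part of this test run.

package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	// Docker-style normalization of the short image name seen in the CRI-O log.
	named, err := reference.ParseNormalizedNamed("kicbase/echo-server:latest")
	if err != nil {
		panic(err)
	}
	// Prints "docker.io/kicbase/echo-server:latest": the implicit docker.io
	// default that an enforcing short-name mode refuses to assume.
	fmt.Println(named.String())
}

In this situation the usual remedies are to reference the fully qualified image (docker.io/kicbase/echo-server:latest) or to declare a short-name alias in the runtime's registries configuration; neither was configured in this run, which is consistent with the ImagePullBackOff errors in the kubelet section below.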
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	84b1ea2edc6fc       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   e40c04d4958a1       sp-pod                                      default
	2dd66d0240cff       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   5c72ab152eb88       nginx-svc                                   default
	0902ac40d048a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       0                   ccae119f030b6       storage-provisioner                         kube-system
	cba4b3eec231e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   0                   704ffd5be208f       coredns-66bc5c9577-27dgj                    kube-system
	276c4f25489d8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   0                   870698ef495e9       coredns-66bc5c9577-9qtgt                    kube-system
	b56319d96aaa3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               0                   3758c390b0863       kindnet-wxxr9                               kube-system
	204aae33cfd19       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                0                   6768534016aa8       kube-proxy-x4x5p                            kube-system
	a07620d3627f1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   8                   79f988297a953       kube-controller-manager-functional-642533   kube-system
	c4dd647cd2f22       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            4                   0a41f86461654       kube-scheduler-functional-642533            kube-system
	3c9cae16313d9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      4                   b80ef8047e06d       etcd-functional-642533                      kube-system
	568920a59e54a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   bd74e93f51c62       kube-apiserver-functional-642533            kube-system
	
	
	==> coredns [276c4f25489d8e5a81caaf9f40184b9f86967144352bed03cda8bc7be7280475] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	
	
	==> coredns [cba4b3eec231e35a61f4410d381187182a17fd4d297d25ad057d2d21fc7070c8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               functional-642533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-642533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=functional-642533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_03_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:03:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-642533
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:13:18 +0000   Wed, 19 Nov 2025 22:03:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:13:18 +0000   Wed, 19 Nov 2025 22:03:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:13:18 +0000   Wed, 19 Nov 2025 22:03:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:13:18 +0000   Wed, 19 Nov 2025 22:03:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-642533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                49a978cd-6228-40fe-b205-27ab98ac5842
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-v2sld                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-sxlzs          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-27dgj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-66bc5c9577-9qtgt                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-functional-642533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-wxxr9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-functional-642533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-642533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-x4x5p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-functional-642533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-642533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-642533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-642533 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                kubelet          Node functional-642533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                kubelet          Node functional-642533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                kubelet          Node functional-642533 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-642533 event: Registered Node functional-642533 in Controller
	
	
	==> dmesg <==
	[Nov19 21:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 21:49] overlayfs: idmapped layers are currently not supported
	[  +0.079274] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov19 21:55] overlayfs: idmapped layers are currently not supported
	[Nov19 21:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3c9cae16313d93160c0f5236e20bce689731acaa0e5017df6b749a834a05c718] <==
	{"level":"warn","ts":"2025-11-19T22:03:10.836143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.876952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.878806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.903863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.911783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.969384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.982213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:10.989158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.021836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.053133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.065527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.079336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.093559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.119648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.137555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.154466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.169594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.193458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.231702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.256907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.298414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:03:11.446948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57490","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:13:10.016926Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":918}
	{"level":"info","ts":"2025-11-19T22:13:10.026204Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":918,"took":"8.536824ms","hash":1090121480,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-11-19T22:13:10.026282Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1090121480,"revision":918,"compact-revision":-1}
	
	
	==> kernel <==
	 22:13:52 up  3:56,  0 user,  load average: 0.16, 0.24, 0.72
	Linux functional-642533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b56319d96aaa3444840944d273e24036fa1d587fb9ac75effe8380216a343445] <==
	I1119 22:11:51.030446       1 main.go:301] handling current node
	I1119 22:12:01.038069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:01.038104       1 main.go:301] handling current node
	I1119 22:12:11.039638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:11.039673       1 main.go:301] handling current node
	I1119 22:12:21.036896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:21.036938       1 main.go:301] handling current node
	I1119 22:12:31.030406       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:31.030443       1 main.go:301] handling current node
	I1119 22:12:41.030367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:41.030512       1 main.go:301] handling current node
	I1119 22:12:51.030417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:12:51.030458       1 main.go:301] handling current node
	I1119 22:13:01.032917       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:01.032955       1 main.go:301] handling current node
	I1119 22:13:11.031735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:11.031798       1 main.go:301] handling current node
	I1119 22:13:21.031029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:21.031156       1 main.go:301] handling current node
	I1119 22:13:31.038954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:31.038991       1 main.go:301] handling current node
	I1119 22:13:41.039513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:41.039621       1 main.go:301] handling current node
	I1119 22:13:51.030990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:13:51.031029       1 main.go:301] handling current node
	
	
	==> kube-apiserver [568920a59e54a00517d7f32c1fc53acd45fb5504ac725e50489c9e64b58294d0] <==
	I1119 22:03:12.508778       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:03:12.514222       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:03:13.233932       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:03:13.242512       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:03:13.242594       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:03:14.097565       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:03:14.153054       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:03:14.235142       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:03:14.244791       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1119 22:03:14.246016       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:03:14.256688       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:03:14.416735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:03:15.211471       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:03:15.230222       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:03:15.244984       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:03:19.470417       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:03:19.477305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:03:20.116990       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:03:20.536753       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:03:36.012312       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.154.51"}
	I1119 22:03:41.823070       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.173.105"}
	I1119 22:03:50.498274       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.57.31"}
	E1119 22:03:57.877362       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:43872: use of closed network connection
	I1119 22:04:04.936942       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.246.148"}
	I1119 22:13:12.395078       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a07620d3627f164f220b8d55ba510ffac8de3fa39c98936461aac414d7b19f7f] <==
	I1119 22:03:19.417514       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:03:19.417532       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:03:19.417536       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:03:19.417541       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:03:19.421690       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:03:19.423439       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:03:19.428005       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:03:19.430332       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-642533" podCIDRs=["10.244.0.0/24"]
	I1119 22:03:19.434573       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:03:19.451097       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:03:19.456302       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:03:19.460891       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:03:19.461082       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:03:19.461119       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:03:19.461198       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:03:19.461290       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:03:19.461179       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:03:19.461518       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:03:19.461926       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:03:19.462654       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:03:19.462723       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:03:19.463925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:03:19.464032       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:03:19.466335       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:03:19.472873       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [204aae33cfd198a5888b973337552c4fcce90e5e8b28e5321aa8f9fbb1b17afa] <==
	I1119 22:03:20.904666       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:03:20.988953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:03:21.090073       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:03:21.090117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 22:03:21.090216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:03:21.323566       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:03:21.323621       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:03:21.353591       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:03:21.357279       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:03:21.357324       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:03:21.358555       1 config.go:200] "Starting service config controller"
	I1119 22:03:21.358585       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:03:21.358603       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:03:21.358608       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:03:21.358619       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:03:21.358623       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:03:21.359425       1 config.go:309] "Starting node config controller"
	I1119 22:03:21.359446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:03:21.359453       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:03:21.459082       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:03:21.459123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:03:21.459170       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c4dd647cd2f229dc0578f949e4cbcf76401ea2e516eded4e217245c7e49fdb53] <==
	I1119 22:03:13.364470       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:03:13.366574       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:03:13.366661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:03:13.366991       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:03:13.367070       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:03:13.379457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:03:13.379676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:03:13.379688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:03:13.379753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:03:13.379811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:03:13.379868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:03:13.379912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:03:13.379959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:03:13.380004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:03:13.380046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:03:13.380080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:03:13.380116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:03:13.380882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:03:13.380963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:03:13.381035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:03:13.381124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:03:13.381149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:03:13.381407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:03:13.381507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1119 22:03:14.267034       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:11:16 functional-642533 kubelet[14827]: E1119 22:11:16.203441   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:11:20 functional-642533 kubelet[14827]: E1119 22:11:20.203129   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:11:28 functional-642533 kubelet[14827]: E1119 22:11:28.203554   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:11:32 functional-642533 kubelet[14827]: E1119 22:11:32.203252   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:11:39 functional-642533 kubelet[14827]: E1119 22:11:39.203427   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:11:44 functional-642533 kubelet[14827]: E1119 22:11:44.203694   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:11:52 functional-642533 kubelet[14827]: E1119 22:11:52.203331   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:11:55 functional-642533 kubelet[14827]: E1119 22:11:55.203321   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:12:06 functional-642533 kubelet[14827]: E1119 22:12:06.203191   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:12:09 functional-642533 kubelet[14827]: E1119 22:12:09.203410   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:12:19 functional-642533 kubelet[14827]: E1119 22:12:19.203894   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:12:21 functional-642533 kubelet[14827]: E1119 22:12:21.205391   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:12:32 functional-642533 kubelet[14827]: E1119 22:12:32.203415   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:12:32 functional-642533 kubelet[14827]: E1119 22:12:32.203571   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:12:44 functional-642533 kubelet[14827]: E1119 22:12:44.203665   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:12:47 functional-642533 kubelet[14827]: E1119 22:12:47.203255   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:12:55 functional-642533 kubelet[14827]: E1119 22:12:55.203684   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:13:00 functional-642533 kubelet[14827]: E1119 22:13:00.203960   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:13:07 functional-642533 kubelet[14827]: E1119 22:13:07.203642   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:13:15 functional-642533 kubelet[14827]: E1119 22:13:15.204186   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:13:18 functional-642533 kubelet[14827]: E1119 22:13:18.203160   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:13:29 functional-642533 kubelet[14827]: E1119 22:13:29.203843   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:13:33 functional-642533 kubelet[14827]: E1119 22:13:33.203149   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	Nov 19 22:13:43 functional-642533 kubelet[14827]: E1119 22:13:43.203183   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-sxlzs" podUID="e178097e-0efa-4f2d-9d4c-70558b0dd158"
	Nov 19 22:13:47 functional-642533 kubelet[14827]: E1119 22:13:47.204046   14827 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-v2sld" podUID="8c450599-184a-4a26-8c2a-d5a4eae88302"
	
	
	==> storage-provisioner [0902ac40d048a0cc96f2a3be174b27fda51c2652d533694df2da926462b2b5b0] <==
	W1119 22:13:28.804540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:30.807595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:30.811743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:32.814340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:32.820853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:34.824507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:34.829018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:36.831931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:36.838438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:38.841561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:38.846130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:40.848963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:40.856253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:42.859513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:42.863978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:44.868012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:44.872844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:46.876259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:46.880573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:48.883238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:48.890091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:50.893581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:50.899198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:52.902500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:13:52.908048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-642533 -n functional-642533
helpers_test.go:269: (dbg) Run:  kubectl --context functional-642533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-v2sld hello-node-connect-7d85dfc575-sxlzs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-642533 describe pod hello-node-75c85bcc94-v2sld hello-node-connect-7d85dfc575-sxlzs
helpers_test.go:290: (dbg) kubectl --context functional-642533 describe pod hello-node-75c85bcc94-v2sld hello-node-connect-7d85dfc575-sxlzs:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-v2sld
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-642533/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 22:04:04 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d7cvp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d7cvp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m50s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-v2sld to functional-642533
	  Normal   Pulling    6m55s (x5 over 9m49s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m55s (x5 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m55s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m41s (x22 over 9m49s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m41s (x22 over 9m49s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-sxlzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-642533/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 22:03:50 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gb5gj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gb5gj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-sxlzs to functional-642533
	  Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (604.30s)
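Post-mortem note: the kubelet events above reject every pull of "kicbase/echo-server" with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", so both hello-node pods never leave ImagePullBackOff. A minimal sketch of the difference, assuming the image resolves on Docker Hub (deployment and container names taken from the log above):

# Short name, as the test deploys it: resolution is left to the node's registries configuration
kubectl --context functional-642533 create deployment hello-node-connect --image kicbase/echo-server
# Fully qualified name: not subject to short-name resolution, so enforcing mode cannot reject it
kubectl --context functional-642533 create deployment hello-node-connect --image docker.io/kicbase/echo-server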

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-642533 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-642533 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-v2sld" [8c450599-184a-4a26-8c2a-d5a4eae88302] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1119 22:06:54.908150  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:11:54.908909  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:13:17.994904  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-642533 -n functional-642533
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-19 22:14:05.374613581 +0000 UTC m=+1588.281937344
functional_test.go:1460: (dbg) Run:  kubectl --context functional-642533 describe po hello-node-75c85bcc94-v2sld -n default
functional_test.go:1460: (dbg) kubectl --context functional-642533 describe po hello-node-75c85bcc94-v2sld -n default:
Name:             hello-node-75c85bcc94-v2sld
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-642533/192.168.49.2
Start Time:       Wed, 19 Nov 2025 22:04:04 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d7cvp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-d7cvp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-v2sld to functional-642533
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m52s (x22 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-642533 logs hello-node-75c85bcc94-v2sld -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-642533 logs hello-node-75c85bcc94-v2sld -n default: exit status 1 (101.737736ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-v2sld" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-642533 logs hello-node-75c85bcc94-v2sld -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)
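The DeployApp wait fails for the same reason: hello-node-75c85bcc94-v2sld stays in ImagePullBackOff on the same short-name error. A hedged way to confirm the node-side policy behind "short name mode is enforcing" (the /etc/containers path is the usual containers-registries location and may differ in this base image):

# Look for the short-name policy and any unqualified-search-registries entries on the node
minikube -p functional-642533 ssh -- sudo grep -R -n -E 'short-name-mode|unqualified-search-registries' /etc/containers/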

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 service --namespace=default --https --url hello-node: exit status 115 (515.317736ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31381
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-642533 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
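The service lookup itself works (it prints https://192.168.49.2:31381) and then exits 115 because no running pod backs hello-node, consistent with the DeployApp failure above; the Format and URL subtests below fail the same way. A quick check, assuming the same kubectl context used elsewhere in this report:

# Confirm there is no ready pod, and therefore no endpoint, behind the hello-node service
kubectl --context functional-642533 get pods -l app=hello-node -n default
kubectl --context functional-642533 get endpoints hello-node -n default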

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 service hello-node --url --format={{.IP}}: exit status 115 (506.495938ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-642533 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 service hello-node --url: exit status 115 (507.145889ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31381
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-642533 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31381
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image load --daemon kicbase/echo-server:functional-642533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-642533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)
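`image load --daemon` reads the tag from the host Docker daemon and pushes it into the cluster runtime (CRI-O here), yet the follow-up `image ls` does not show it. A minimal diagnostic sketch, assuming host Docker access and SSH into the node:

# Does the tag exist in the host Docker daemon that `image load --daemon` reads from?
docker image inspect kicbase/echo-server:functional-642533 --format '{{.Id}}'
# What does CRI-O actually hold inside the node after the load?
minikube -p functional-642533 ssh -- sudo crictl images | grep echo-server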

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image load --daemon kicbase/echo-server:functional-642533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-642533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-642533
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image load --daemon kicbase/echo-server:functional-642533 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 image load --daemon kicbase/echo-server:functional-642533 --alsologtostderr: (2.769440207s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-642533" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image save kicbase/echo-server:functional-642533 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1119 22:14:20.337412  900592 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:14:20.337558  900592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:20.337570  900592 out.go:374] Setting ErrFile to fd 2...
	I1119 22:14:20.337575  900592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:20.337837  900592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:14:20.338500  900592 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:14:20.338620  900592 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:14:20.339111  900592 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
	I1119 22:14:20.357629  900592 ssh_runner.go:195] Run: systemctl --version
	I1119 22:14:20.357691  900592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
	I1119 22:14:20.375151  900592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
	I1119 22:14:20.477550  900592 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1119 22:14:20.477638  900592 cache_images.go:255] Failed to load cached images for "functional-642533": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1119 22:14:20.477683  900592 cache_images.go:267] failed pushing to: functional-642533

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
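The stderr above fails at `stat .../echo-server-save.tar: no such file or directory`: the tarball that the earlier ImageSaveToFile step was expected to write never appeared, so this failure is downstream of that one. A sketch of the same round trip with an explicit check in between (the /tmp path is illustrative; the subcommands mirror the ones the tests run):

# Save, verify the artifact exists, then load it back
out/minikube-linux-arm64 -p functional-642533 image save kicbase/echo-server:functional-642533 /tmp/echo-server-save.tar --alsologtostderr
ls -l /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-642533 image load /tmp/echo-server-save.tar --alsologtostderr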

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-642533
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image save --daemon kicbase/echo-server:functional-642533 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-642533
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-642533: exit status 1 (17.496929ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-642533

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-642533

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-727330 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-727330 --output=json --user=testUser: exit status 80 (1.782037444s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4c3c2ef3-1243-46ea-93bc-24e2a0766758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-727330 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6e7c3607-cab1-44de-a432-16c816dec141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T22:27:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"4702dce0-9de9-4b1f-b0c0-6bf96c976ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-727330 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.78s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-727330 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-727330 --output=json --user=testUser: exit status 80 (1.429654196s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5e78f247-4e65-40e0-9740-a33249f0cf87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-727330 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"95f54e80-ec02-4d9b-9546-64025712a8fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T22:27:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"70e7a280-ae88-44a3-97eb-08539548160e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-727330 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.43s)
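Both pause and unpause fail at the same step: `sudo runc list -f json` inside the node exits 1 with `open /run/runc: no such file or directory`, so minikube cannot enumerate running containers. A hedged check from the host (assuming /run/runc is the state directory minikube expects; CRI-O may keep runc state elsewhere on this image):

# Does the expected runc state directory exist, and does runc see anything there?
minikube -p json-output-727330 ssh -- "sudo ls -ld /run/runc; sudo runc --root /run/runc list"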

                                                
                                    
x
+
TestPause/serial/Pause (6.6s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-743639 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-743639 --alsologtostderr -v=5: exit status 80 (1.897565082s)

                                                
                                                
-- stdout --
	* Pausing node pause-743639 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:49:50.896697 1034492 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:49:50.898569 1034492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:50.898631 1034492 out.go:374] Setting ErrFile to fd 2...
	I1119 22:49:50.898651 1034492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:50.899042 1034492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:49:50.899416 1034492 out.go:368] Setting JSON to false
	I1119 22:49:50.899473 1034492 mustload.go:66] Loading cluster: pause-743639
	I1119 22:49:50.899973 1034492 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:50.900519 1034492 cli_runner.go:164] Run: docker container inspect pause-743639 --format={{.State.Status}}
	I1119 22:49:50.928515 1034492 host.go:66] Checking if "pause-743639" exists ...
	I1119 22:49:50.928833 1034492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:49:51.027509 1034492 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:49:51.015452751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:49:51.029380 1034492 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-743639 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:49:51.034757 1034492 out.go:179] * Pausing node pause-743639 ... 
	I1119 22:49:51.037662 1034492 host.go:66] Checking if "pause-743639" exists ...
	I1119 22:49:51.038007 1034492 ssh_runner.go:195] Run: systemctl --version
	I1119 22:49:51.038058 1034492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:51.058129 1034492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:51.162102 1034492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:49:51.176895 1034492 pause.go:52] kubelet running: true
	I1119 22:49:51.177051 1034492 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:49:51.463056 1034492 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:49:51.463143 1034492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:49:51.546923 1034492 cri.go:89] found id: "f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec"
	I1119 22:49:51.546943 1034492 cri.go:89] found id: "16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8"
	I1119 22:49:51.546948 1034492 cri.go:89] found id: "ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd"
	I1119 22:49:51.546951 1034492 cri.go:89] found id: "f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27"
	I1119 22:49:51.546954 1034492 cri.go:89] found id: "11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5"
	I1119 22:49:51.546958 1034492 cri.go:89] found id: "41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb"
	I1119 22:49:51.546961 1034492 cri.go:89] found id: "6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f"
	I1119 22:49:51.546964 1034492 cri.go:89] found id: "a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16"
	I1119 22:49:51.546967 1034492 cri.go:89] found id: "4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579"
	I1119 22:49:51.546974 1034492 cri.go:89] found id: "071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622"
	I1119 22:49:51.546977 1034492 cri.go:89] found id: "3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	I1119 22:49:51.546980 1034492 cri.go:89] found id: "b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	I1119 22:49:51.546983 1034492 cri.go:89] found id: "68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c"
	I1119 22:49:51.546986 1034492 cri.go:89] found id: "1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997"
	I1119 22:49:51.546989 1034492 cri.go:89] found id: ""
	I1119 22:49:51.547048 1034492 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:49:51.557953 1034492 retry.go:31] will retry after 237.094846ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:51Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:49:51.795417 1034492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:49:51.809070 1034492 pause.go:52] kubelet running: false
	I1119 22:49:51.809142 1034492 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:49:51.956179 1034492 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:49:51.956268 1034492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:49:52.029131 1034492 cri.go:89] found id: "f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec"
	I1119 22:49:52.029156 1034492 cri.go:89] found id: "16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8"
	I1119 22:49:52.029161 1034492 cri.go:89] found id: "ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd"
	I1119 22:49:52.029165 1034492 cri.go:89] found id: "f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27"
	I1119 22:49:52.029168 1034492 cri.go:89] found id: "11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5"
	I1119 22:49:52.029172 1034492 cri.go:89] found id: "41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb"
	I1119 22:49:52.029175 1034492 cri.go:89] found id: "6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f"
	I1119 22:49:52.029178 1034492 cri.go:89] found id: "a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16"
	I1119 22:49:52.029181 1034492 cri.go:89] found id: "4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579"
	I1119 22:49:52.029222 1034492 cri.go:89] found id: "071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622"
	I1119 22:49:52.029232 1034492 cri.go:89] found id: "3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	I1119 22:49:52.029237 1034492 cri.go:89] found id: "b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	I1119 22:49:52.029240 1034492 cri.go:89] found id: "68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c"
	I1119 22:49:52.029243 1034492 cri.go:89] found id: "1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997"
	I1119 22:49:52.029246 1034492 cri.go:89] found id: ""
	I1119 22:49:52.029318 1034492 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:49:52.040630 1034492 retry.go:31] will retry after 409.121165ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:52Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:49:52.450272 1034492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:49:52.463329 1034492 pause.go:52] kubelet running: false
	I1119 22:49:52.463395 1034492 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:49:52.613706 1034492 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:49:52.613820 1034492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:49:52.685589 1034492 cri.go:89] found id: "f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec"
	I1119 22:49:52.685613 1034492 cri.go:89] found id: "16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8"
	I1119 22:49:52.685618 1034492 cri.go:89] found id: "ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd"
	I1119 22:49:52.685631 1034492 cri.go:89] found id: "f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27"
	I1119 22:49:52.685635 1034492 cri.go:89] found id: "11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5"
	I1119 22:49:52.685640 1034492 cri.go:89] found id: "41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb"
	I1119 22:49:52.685643 1034492 cri.go:89] found id: "6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f"
	I1119 22:49:52.685646 1034492 cri.go:89] found id: "a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16"
	I1119 22:49:52.685650 1034492 cri.go:89] found id: "4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579"
	I1119 22:49:52.685660 1034492 cri.go:89] found id: "071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622"
	I1119 22:49:52.685664 1034492 cri.go:89] found id: "3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	I1119 22:49:52.685668 1034492 cri.go:89] found id: "b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	I1119 22:49:52.685671 1034492 cri.go:89] found id: "68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c"
	I1119 22:49:52.685676 1034492 cri.go:89] found id: "1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997"
	I1119 22:49:52.685684 1034492 cri.go:89] found id: ""
	I1119 22:49:52.685737 1034492 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:49:52.701375 1034492 out.go:203] 
	W1119 22:49:52.704470 1034492 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:49:52.704493 1034492 out.go:285] * 
	* 
	W1119 22:49:52.711466 1034492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:49:52.714536 1034492 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-743639 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-743639
helpers_test.go:243: (dbg) docker inspect pause-743639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2",
	        "Created": "2025-11-19T22:48:06.825712726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1028391,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:48:06.89573547Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2-json.log",
	        "Name": "/pause-743639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-743639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-743639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2",
	                "LowerDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-743639",
	                "Source": "/var/lib/docker/volumes/pause-743639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-743639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-743639",
	                "name.minikube.sigs.k8s.io": "pause-743639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e153bff2a9e7d1f7bb75e614f3aea4ecb9dfb06aa3e075f765751d5c6db7cf9",
	            "SandboxKey": "/var/run/docker/netns/6e153bff2a9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-743639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:7e:b0:8e:a9:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f48b38c6e86e5dd0af19144d84febf77488cc9e3c0ba9242791a8c57aa1cb5ea",
	                    "EndpointID": "45d2df274d20312d52cec36704d806d1e20300dcc7d8538c7f9fbd3ff9e937d0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-743639",
	                        "8920cad255c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-743639 -n pause-743639
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-743639 -n pause-743639: exit status 2 (336.935331ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-743639 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-743639 logs -n 25: (1.469805577s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-482978 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:43 UTC │ 19 Nov 25 22:44 UTC │
	│ start   │ -p missing-upgrade-290352 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-290352    │ jenkins │ v1.32.0 │ 19 Nov 25 22:44 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:44 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p missing-upgrade-290352 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-290352    │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ delete  │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ ssh     │ -p NoKubernetes-482978 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │                     │
	│ stop    │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ ssh     │ -p NoKubernetes-482978 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │                     │
	│ delete  │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:46 UTC │
	│ delete  │ -p missing-upgrade-290352                                                                                                                │ missing-upgrade-290352    │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p stopped-upgrade-196185 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-196185    │ jenkins │ v1.32.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:46 UTC │
	│ stop    │ -p kubernetes-upgrade-154655                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:46 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │                     │
	│ stop    │ stopped-upgrade-196185 stop                                                                                                              │ stopped-upgrade-196185    │ jenkins │ v1.32.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:46 UTC │
	│ start   │ -p stopped-upgrade-196185 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-196185    │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:47 UTC │
	│ delete  │ -p stopped-upgrade-196185                                                                                                                │ stopped-upgrade-196185    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ start   │ -p running-upgrade-770765 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-770765    │ jenkins │ v1.32.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ start   │ -p running-upgrade-770765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-770765    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ delete  │ -p running-upgrade-770765                                                                                                                │ running-upgrade-770765    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:48 UTC │
	│ start   │ -p pause-743639 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:48 UTC │ 19 Nov 25 22:49 UTC │
	│ start   │ -p pause-743639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:49 UTC │ 19 Nov 25 22:49 UTC │
	│ pause   │ -p pause-743639 --alsologtostderr -v=5                                                                                                   │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:49:23
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:49:23.013720 1032597 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:49:23.013845 1032597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:23.013858 1032597 out.go:374] Setting ErrFile to fd 2...
	I1119 22:49:23.013863 1032597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:23.014163 1032597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:49:23.014626 1032597 out.go:368] Setting JSON to false
	I1119 22:49:23.015705 1032597 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16292,"bootTime":1763576271,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:49:23.015793 1032597 start.go:143] virtualization:  
	I1119 22:49:23.021068 1032597 out.go:179] * [pause-743639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:49:23.024354 1032597 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:49:23.024400 1032597 notify.go:221] Checking for updates...
	I1119 22:49:23.031037 1032597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:49:23.033937 1032597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:49:23.036910 1032597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:49:23.039708 1032597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:49:23.042575 1032597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:49:23.045929 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:23.046493 1032597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:49:23.072420 1032597 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:49:23.072546 1032597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:49:23.174091 1032597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:49:23.160575272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:49:23.174189 1032597 docker.go:319] overlay module found
	I1119 22:49:23.177393 1032597 out.go:179] * Using the docker driver based on existing profile
	I1119 22:49:23.180643 1032597 start.go:309] selected driver: docker
	I1119 22:49:23.180672 1032597 start.go:930] validating driver "docker" against &{Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:23.180814 1032597 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:49:23.180917 1032597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:49:23.269708 1032597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:49:23.260163128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:49:23.270125 1032597 cni.go:84] Creating CNI manager for ""
	I1119 22:49:23.270186 1032597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:49:23.270240 1032597 start.go:353] cluster config:
	{Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:23.273981 1032597 out.go:179] * Starting "pause-743639" primary control-plane node in "pause-743639" cluster
	I1119 22:49:23.276907 1032597 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:49:23.279911 1032597 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:49:23.283758 1032597 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:49:23.283808 1032597 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:49:23.283818 1032597 cache.go:65] Caching tarball of preloaded images
	I1119 22:49:23.283853 1032597 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:49:23.283903 1032597 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:49:23.283912 1032597 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:49:23.284059 1032597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/config.json ...
	I1119 22:49:23.310635 1032597 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:49:23.310656 1032597 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:49:23.310669 1032597 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:49:23.310692 1032597 start.go:360] acquireMachinesLock for pause-743639: {Name:mkd6c51ef21d7a72c2cd2654b0e7a0088542c569 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:49:23.310744 1032597 start.go:364] duration metric: took 36.283µs to acquireMachinesLock for "pause-743639"
	I1119 22:49:23.310762 1032597 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:49:23.310768 1032597 fix.go:54] fixHost starting: 
	I1119 22:49:23.311141 1032597 cli_runner.go:164] Run: docker container inspect pause-743639 --format={{.State.Status}}
	I1119 22:49:23.336593 1032597 fix.go:112] recreateIfNeeded on pause-743639: state=Running err=<nil>
	W1119 22:49:23.336621 1032597 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:49:23.067002 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:23.079453 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:23.079528 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:23.136141 1019854 cri.go:89] found id: ""
	I1119 22:49:23.136165 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.136183 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:23.136190 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:23.136251 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:23.174067 1019854 cri.go:89] found id: ""
	I1119 22:49:23.174092 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.174101 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:23.174107 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:23.174167 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:23.207519 1019854 cri.go:89] found id: ""
	I1119 22:49:23.207567 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.207576 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:23.207582 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:23.207664 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:23.248357 1019854 cri.go:89] found id: ""
	I1119 22:49:23.248385 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.248394 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:23.248403 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:23.248462 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:23.289089 1019854 cri.go:89] found id: ""
	I1119 22:49:23.289110 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.289118 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:23.289197 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:23.289263 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:23.327575 1019854 cri.go:89] found id: ""
	I1119 22:49:23.327595 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.327603 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:23.327609 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:23.327665 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:23.369345 1019854 cri.go:89] found id: ""
	I1119 22:49:23.369368 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.369383 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:23.369390 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:23.369451 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:23.403759 1019854 cri.go:89] found id: ""
	I1119 22:49:23.403787 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.403796 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:23.403805 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:23.403823 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:23.453411 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:23.453436 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:23.574049 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:23.574125 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:23.593288 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:23.593316 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:23.683540 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:23.683560 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:23.683573 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:23.339835 1032597 out.go:252] * Updating the running docker "pause-743639" container ...
	I1119 22:49:23.339879 1032597 machine.go:94] provisionDockerMachine start ...
	I1119 22:49:23.339976 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.370831 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.371213 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.371226 1032597 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:49:23.534983 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-743639
	
	I1119 22:49:23.535009 1032597 ubuntu.go:182] provisioning hostname "pause-743639"
	I1119 22:49:23.535080 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.561541 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.561853 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.561872 1032597 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-743639 && echo "pause-743639" | sudo tee /etc/hostname
	I1119 22:49:23.737677 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-743639
	
	I1119 22:49:23.737778 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.757339 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.757658 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.757689 1032597 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-743639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-743639/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-743639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:49:23.899318 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:49:23.899347 1032597 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:49:23.899379 1032597 ubuntu.go:190] setting up certificates
	I1119 22:49:23.899389 1032597 provision.go:84] configureAuth start
	I1119 22:49:23.899466 1032597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-743639
	I1119 22:49:23.926076 1032597 provision.go:143] copyHostCerts
	I1119 22:49:23.926150 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:49:23.926170 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:49:23.926271 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:49:23.926394 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:49:23.926404 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:49:23.926432 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:49:23.926500 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:49:23.926512 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:49:23.926538 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:49:23.926608 1032597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.pause-743639 san=[127.0.0.1 192.168.85.2 localhost minikube pause-743639]
	I1119 22:49:24.669912 1032597 provision.go:177] copyRemoteCerts
	I1119 22:49:24.669982 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:49:24.670031 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:24.690632 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:24.795205 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:49:24.815442 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:49:24.834754 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:49:24.854251 1032597 provision.go:87] duration metric: took 954.829049ms to configureAuth
	I1119 22:49:24.854282 1032597 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:49:24.854531 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:24.854639 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:24.872042 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:24.872375 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:24.872396 1032597 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:49:26.221725 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:26.232140 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:26.232224 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:26.257803 1019854 cri.go:89] found id: ""
	I1119 22:49:26.257828 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.257837 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:26.257843 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:26.257901 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:26.285478 1019854 cri.go:89] found id: ""
	I1119 22:49:26.285505 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.285514 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:26.285521 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:26.285584 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:26.310443 1019854 cri.go:89] found id: ""
	I1119 22:49:26.310468 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.310476 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:26.310483 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:26.310539 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:26.337770 1019854 cri.go:89] found id: ""
	I1119 22:49:26.337791 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.337799 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:26.337805 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:26.337863 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:26.363599 1019854 cri.go:89] found id: ""
	I1119 22:49:26.363624 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.363633 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:26.363640 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:26.363710 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:26.391201 1019854 cri.go:89] found id: ""
	I1119 22:49:26.391236 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.391246 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:26.391255 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:26.391330 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:26.416952 1019854 cri.go:89] found id: ""
	I1119 22:49:26.416975 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.416983 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:26.416989 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:26.417054 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:26.443492 1019854 cri.go:89] found id: ""
	I1119 22:49:26.443517 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.443526 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:26.443535 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:26.443563 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:26.557443 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:26.557483 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:26.575702 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:26.575732 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:26.644203 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:26.644220 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:26.644232 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:26.679599 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:26.679633 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
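The block above (and its repeats below) is minikube's diagnostics pass while nothing answers on localhost:8443: it polls for a kube-apiserver process, finds no control-plane containers via crictl, and then gathers the same set of node logs each cycle. One pass, condensed into the exact commands taken from the entries above:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver             # repeated for etcd, coredns, kube-scheduler, ...
    sudo journalctl -u kubelet -n 400                           # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig               # fails while the apiserver is down
    sudo journalctl -u crio -n 400                              # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker as fallback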
	I1119 22:49:29.209240 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:29.219515 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:29.219599 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:29.245286 1019854 cri.go:89] found id: ""
	I1119 22:49:29.245309 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.245317 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:29.245324 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:29.245383 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:29.271738 1019854 cri.go:89] found id: ""
	I1119 22:49:29.271763 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.271772 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:29.271787 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:29.271869 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:29.306638 1019854 cri.go:89] found id: ""
	I1119 22:49:29.306663 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.306672 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:29.306678 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:29.306735 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:29.332438 1019854 cri.go:89] found id: ""
	I1119 22:49:29.332463 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.332472 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:29.332478 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:29.332541 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:29.361424 1019854 cri.go:89] found id: ""
	I1119 22:49:29.361448 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.361457 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:29.361463 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:29.361522 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:29.387494 1019854 cri.go:89] found id: ""
	I1119 22:49:29.387519 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.387527 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:29.387534 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:29.387593 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:29.413779 1019854 cri.go:89] found id: ""
	I1119 22:49:29.413804 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.413813 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:29.413827 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:29.413896 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:29.440065 1019854 cri.go:89] found id: ""
	I1119 22:49:29.440094 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.440104 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:29.440112 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:29.440124 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:29.552582 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:29.552617 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:29.569225 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:29.569253 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:29.635548 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:29.635568 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:29.635581 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:29.671905 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:29.671940 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:30.271359 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:49:30.271384 1032597 machine.go:97] duration metric: took 6.931496344s to provisionDockerMachine
	I1119 22:49:30.271412 1032597 start.go:293] postStartSetup for "pause-743639" (driver="docker")
	I1119 22:49:30.271426 1032597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:49:30.271495 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:49:30.271541 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.289734 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.390925 1032597 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:49:30.394521 1032597 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:49:30.394549 1032597 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:49:30.394561 1032597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:49:30.394618 1032597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:49:30.394709 1032597 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:49:30.394836 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:49:30.402795 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:49:30.421044 1032597 start.go:296] duration metric: took 149.613212ms for postStartSetup
	I1119 22:49:30.421172 1032597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:49:30.421221 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.438515 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.536202 1032597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:49:30.542468 1032597 fix.go:56] duration metric: took 7.231691723s for fixHost
	I1119 22:49:30.542510 1032597 start.go:83] releasing machines lock for "pause-743639", held for 7.231756528s
	I1119 22:49:30.542724 1032597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-743639
	I1119 22:49:30.560312 1032597 ssh_runner.go:195] Run: cat /version.json
	I1119 22:49:30.560354 1032597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:49:30.560366 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.560412 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.584127 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.588529 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.775603 1032597 ssh_runner.go:195] Run: systemctl --version
	I1119 22:49:30.782093 1032597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:49:30.821565 1032597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:49:30.826074 1032597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:49:30.826206 1032597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:49:30.834038 1032597 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:49:30.834115 1032597 start.go:496] detecting cgroup driver to use...
	I1119 22:49:30.834153 1032597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:49:30.834207 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:49:30.849700 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:49:30.863332 1032597 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:49:30.863463 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:49:30.879486 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:49:30.892955 1032597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:49:31.031013 1032597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:49:31.164761 1032597 docker.go:234] disabling docker service ...
	I1119 22:49:31.164878 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:49:31.179966 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:49:31.193343 1032597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:49:31.331886 1032597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:49:31.463340 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:49:31.476660 1032597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:49:31.492270 1032597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:49:31.492387 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.501591 1032597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:49:31.501661 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.510738 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.519910 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.529006 1032597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:49:31.537833 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.547038 1032597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.556394 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.565326 1032597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:49:31.572876 1032597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:49:31.580644 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:31.716529 1032597 ssh_runner.go:195] Run: sudo systemctl restart crio
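Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before crio is restarted (a sketch assembled from the substitutions in the log; the TOML section headers are CRI-O's usual layout, and the drop-in's other defaults are omitted):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]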
	I1119 22:49:31.928963 1032597 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:49:31.929086 1032597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:49:31.933519 1032597 start.go:564] Will wait 60s for crictl version
	I1119 22:49:31.933586 1032597 ssh_runner.go:195] Run: which crictl
	I1119 22:49:31.937128 1032597 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:49:31.969342 1032597 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:49:31.969445 1032597 ssh_runner.go:195] Run: crio --version
	I1119 22:49:31.998117 1032597 ssh_runner.go:195] Run: crio --version
	I1119 22:49:32.031007 1032597 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:49:32.033947 1032597 cli_runner.go:164] Run: docker network inspect pause-743639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:49:32.051127 1032597 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:49:32.056512 1032597 kubeadm.go:884] updating cluster {Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:49:32.056659 1032597 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:49:32.056767 1032597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:49:32.091393 1032597 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:49:32.091416 1032597 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:49:32.091472 1032597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:49:32.118343 1032597 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:49:32.118365 1032597 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:49:32.118382 1032597 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:49:32.118494 1032597 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-743639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:49:32.118583 1032597 ssh_runner.go:195] Run: crio config
	I1119 22:49:32.179928 1032597 cni.go:84] Creating CNI manager for ""
	I1119 22:49:32.179955 1032597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:49:32.179973 1032597 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:49:32.179998 1032597 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-743639 NodeName:pause-743639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:49:32.180119 1032597 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-743639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:49:32.180201 1032597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:49:32.188450 1032597 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:49:32.188580 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:49:32.196148 1032597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1119 22:49:32.211588 1032597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:49:32.228731 1032597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
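The rendered kubeadm config shown earlier is what lands here as /var/tmp/minikube/kubeadm.yaml.new (2209 bytes in this run). If a file like this needed checking by hand, recent kubeadm releases can validate it without touching the cluster; a sketch, not something the test harness runs itself:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new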
	I1119 22:49:32.243678 1032597 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:49:32.248775 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:32.427818 1032597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:49:32.444334 1032597 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639 for IP: 192.168.85.2
	I1119 22:49:32.444395 1032597 certs.go:195] generating shared ca certs ...
	I1119 22:49:32.444451 1032597 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:32.444668 1032597 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:49:32.444749 1032597 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:49:32.444786 1032597 certs.go:257] generating profile certs ...
	I1119 22:49:32.444936 1032597 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key
	I1119 22:49:32.445044 1032597 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.key.69c82afc
	I1119 22:49:32.445130 1032597 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.key
	I1119 22:49:32.445323 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:49:32.445395 1032597 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:49:32.445423 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:49:32.445487 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:49:32.445553 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:49:32.445614 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:49:32.445717 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:49:32.446620 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:49:32.468890 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:49:32.491420 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:49:32.512789 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:49:32.546778 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 22:49:32.570780 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:49:32.592293 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:49:32.616422 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:49:32.640803 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:49:32.661795 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:49:32.682153 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:49:32.704567 1032597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:49:32.720555 1032597 ssh_runner.go:195] Run: openssl version
	I1119 22:49:32.727880 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:49:32.738832 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.743181 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.743300 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.788291 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:49:32.797867 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:49:32.807039 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.811892 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.812008 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.853983 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 22:49:32.862049 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:49:32.870689 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.874573 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.874647 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.915919 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
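The three install blocks above follow one pattern: copy the PEM into /usr/share/ca-certificates, link it into /etc/ssl/certs, then add a symlink named after its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so the trust-store lookup can find it. One iteration, condensed from the commands in the log:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # hash-named link OpenSSL resolves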
	I1119 22:49:32.924547 1032597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:49:32.928548 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:49:32.969604 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:49:33.011433 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:49:33.058371 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:49:33.117393 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:49:33.219423 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
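Each openssl call in this run uses -checkend 86400, i.e. "will the certificate still be valid 86400 seconds (24 hours) from now?". A non-zero exit is what would presumably make minikube regenerate the cert instead of reusing it. For example:

    # exit 0: still valid in 24h; exit 1: expiring or already expired
    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt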
	I1119 22:49:33.303018 1032597 kubeadm.go:401] StartCluster: {Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:33.303129 1032597 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:49:33.303195 1032597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:49:33.369271 1032597 cri.go:89] found id: "f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec"
	I1119 22:49:33.369294 1032597 cri.go:89] found id: "16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8"
	I1119 22:49:33.369300 1032597 cri.go:89] found id: "ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd"
	I1119 22:49:33.369303 1032597 cri.go:89] found id: "f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27"
	I1119 22:49:33.369307 1032597 cri.go:89] found id: "11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5"
	I1119 22:49:33.369311 1032597 cri.go:89] found id: "41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb"
	I1119 22:49:33.369314 1032597 cri.go:89] found id: "6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f"
	I1119 22:49:33.369318 1032597 cri.go:89] found id: "a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16"
	I1119 22:49:33.369321 1032597 cri.go:89] found id: "4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579"
	I1119 22:49:33.369329 1032597 cri.go:89] found id: "071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622"
	I1119 22:49:33.369332 1032597 cri.go:89] found id: "3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	I1119 22:49:33.369336 1032597 cri.go:89] found id: "b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	I1119 22:49:33.369339 1032597 cri.go:89] found id: "68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c"
	I1119 22:49:33.369342 1032597 cri.go:89] found id: "1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997"
	I1119 22:49:33.369345 1032597 cri.go:89] found id: ""
	I1119 22:49:33.369396 1032597 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:49:33.390729 1032597 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:33Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:49:33.390815 1032597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:49:33.406860 1032597 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:49:33.406978 1032597 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:49:33.407031 1032597 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:49:33.421184 1032597 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:49:33.421797 1032597 kubeconfig.go:125] found "pause-743639" server: "https://192.168.85.2:8443"
	I1119 22:49:33.422575 1032597 kapi.go:59] client config for pause-743639: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key", CAFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:49:33.423073 1032597 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:49:33.423090 1032597 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:49:33.423096 1032597 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:49:33.423100 1032597 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:49:33.423105 1032597 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:49:33.423385 1032597 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:49:33.435282 1032597 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
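The "does not require reconfiguration" decision just above rests on the diff run immediately before it: the kubeadm.yaml already on the node is compared against the freshly rendered kubeadm.yaml.new, and an empty diff means the existing control plane can simply be restarted. Roughly, as a sketch of the idea rather than minikube's exact code path:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "running cluster already matches the desired config; skip kubeadm reconfiguration"
    else
        echo "config drift detected; the control plane would be reconfigured"
    fi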
	I1119 22:49:33.435315 1032597 kubeadm.go:602] duration metric: took 28.330196ms to restartPrimaryControlPlane
	I1119 22:49:33.435324 1032597 kubeadm.go:403] duration metric: took 132.315654ms to StartCluster
	I1119 22:49:33.435344 1032597 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:33.435406 1032597 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:49:33.436325 1032597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:33.436542 1032597 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:49:33.436862 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:33.436914 1032597 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:49:33.440256 1032597 out.go:179] * Enabled addons: 
	I1119 22:49:33.440335 1032597 out.go:179] * Verifying Kubernetes components...
	I1119 22:49:32.201673 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:32.213435 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:32.213505 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:32.249085 1019854 cri.go:89] found id: ""
	I1119 22:49:32.249105 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.249113 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:32.249119 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:32.249168 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:32.279551 1019854 cri.go:89] found id: ""
	I1119 22:49:32.279578 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.279586 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:32.279593 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:32.279650 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:32.322078 1019854 cri.go:89] found id: ""
	I1119 22:49:32.322105 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.322120 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:32.322127 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:32.322182 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:32.365337 1019854 cri.go:89] found id: ""
	I1119 22:49:32.365365 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.365374 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:32.365381 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:32.365441 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:32.400969 1019854 cri.go:89] found id: ""
	I1119 22:49:32.400991 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.401001 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:32.401008 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:32.401076 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:32.433775 1019854 cri.go:89] found id: ""
	I1119 22:49:32.433798 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.433807 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:32.433813 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:32.433872 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:32.471730 1019854 cri.go:89] found id: ""
	I1119 22:49:32.471752 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.471760 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:32.471767 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:32.471823 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:32.508404 1019854 cri.go:89] found id: ""
	I1119 22:49:32.508426 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.508435 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:32.508444 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:32.508455 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:32.549902 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:32.550123 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:32.586180 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:32.586259 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:32.717324 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:32.717364 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:32.736861 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:32.736897 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:32.824331 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:35.324833 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:35.335009 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:35.335082 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:35.369443 1019854 cri.go:89] found id: ""
	I1119 22:49:35.369468 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.369477 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:35.369483 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:35.369550 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:35.408360 1019854 cri.go:89] found id: ""
	I1119 22:49:35.408386 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.408410 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:35.408416 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:35.408481 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:35.445232 1019854 cri.go:89] found id: ""
	I1119 22:49:35.445260 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.445269 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:35.445275 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:35.445344 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:35.495987 1019854 cri.go:89] found id: ""
	I1119 22:49:35.496012 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.496020 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:35.496026 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:35.496084 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:35.549058 1019854 cri.go:89] found id: ""
	I1119 22:49:35.549084 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.549093 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:35.549099 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:35.549158 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:33.443319 1032597 addons.go:515] duration metric: took 6.385727ms for enable addons: enabled=[]
	I1119 22:49:33.443413 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:33.714005 1032597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:49:33.735247 1032597 node_ready.go:35] waiting up to 6m0s for node "pause-743639" to be "Ready" ...
	I1119 22:49:35.612534 1019854 cri.go:89] found id: ""
	I1119 22:49:35.612560 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.612570 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:35.612576 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:35.612639 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:35.657484 1019854 cri.go:89] found id: ""
	I1119 22:49:35.657511 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.657521 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:35.657527 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:35.657600 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:35.704997 1019854 cri.go:89] found id: ""
	I1119 22:49:35.705022 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.705031 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:35.705040 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:35.705052 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:35.723771 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:35.723802 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:35.814124 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:35.814145 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:35.814158 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:35.866356 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:35.870967 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:35.924406 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:35.924436 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:38.591335 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:38.601635 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:38.601710 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:38.632460 1019854 cri.go:89] found id: ""
	I1119 22:49:38.632486 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.632495 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:38.632502 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:38.632568 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:38.659100 1019854 cri.go:89] found id: ""
	I1119 22:49:38.659126 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.659135 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:38.659141 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:38.659200 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:38.685689 1019854 cri.go:89] found id: ""
	I1119 22:49:38.685715 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.685723 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:38.685730 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:38.685790 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:38.720867 1019854 cri.go:89] found id: ""
	I1119 22:49:38.720893 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.720901 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:38.720908 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:38.720966 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:38.746825 1019854 cri.go:89] found id: ""
	I1119 22:49:38.746851 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.746861 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:38.746895 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:38.746957 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:38.774045 1019854 cri.go:89] found id: ""
	I1119 22:49:38.774071 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.774081 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:38.774088 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:38.774148 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:38.800777 1019854 cri.go:89] found id: ""
	I1119 22:49:38.800802 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.800812 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:38.800818 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:38.800878 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:38.827313 1019854 cri.go:89] found id: ""
	I1119 22:49:38.827337 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.827346 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:38.827355 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:38.827370 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:38.947330 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:38.947369 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:38.963922 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:38.963948 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:39.048624 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:39.048684 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:39.048723 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:39.086765 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:39.086842 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:38.181483 1032597 node_ready.go:49] node "pause-743639" is "Ready"
	I1119 22:49:38.181513 1032597 node_ready.go:38] duration metric: took 4.446235636s for node "pause-743639" to be "Ready" ...
	I1119 22:49:38.181528 1032597 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:49:38.181589 1032597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:38.200730 1032597 api_server.go:72] duration metric: took 4.764150099s to wait for apiserver process to appear ...
	I1119 22:49:38.200751 1032597 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:49:38.200772 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:38.210226 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:38.210269 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:38.700894 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:38.709643 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:38.709673 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:39.200870 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:39.211842 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:39.211934 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:39.701567 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:39.710946 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:49:39.712036 1032597 api_server.go:141] control plane version: v1.34.1
	I1119 22:49:39.712062 1032597 api_server.go:131] duration metric: took 1.511302538s to wait for apiserver health ...
	I1119 22:49:39.712071 1032597 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:49:39.716754 1032597 system_pods.go:59] 7 kube-system pods found
	I1119 22:49:39.716790 1032597 system_pods.go:61] "coredns-66bc5c9577-snvrx" [ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:49:39.716801 1032597 system_pods.go:61] "etcd-pause-743639" [619d36e9-e393-4b99-9e1a-9139b0c405e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:49:39.716807 1032597 system_pods.go:61] "kindnet-9dzb9" [9eefb432-a68a-4f03-8e51-b3137d193739] Running
	I1119 22:49:39.716814 1032597 system_pods.go:61] "kube-apiserver-pause-743639" [b9839362-ec86-4f5b-ac52-d73251fa6223] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:49:39.716824 1032597 system_pods.go:61] "kube-controller-manager-pause-743639" [90695bd3-b7bd-460c-832a-9aea9b830258] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:49:39.716832 1032597 system_pods.go:61] "kube-proxy-jgn2m" [d654ae0c-812e-4423-82c1-860a834c4e1a] Running
	I1119 22:49:39.716839 1032597 system_pods.go:61] "kube-scheduler-pause-743639" [8e0d994d-cf46-4c79-b5c3-c883edc46a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:49:39.716853 1032597 system_pods.go:74] duration metric: took 4.775133ms to wait for pod list to return data ...
	I1119 22:49:39.716864 1032597 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:49:39.719464 1032597 default_sa.go:45] found service account: "default"
	I1119 22:49:39.719492 1032597 default_sa.go:55] duration metric: took 2.617341ms for default service account to be created ...
	I1119 22:49:39.719503 1032597 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:49:39.818712 1032597 system_pods.go:86] 7 kube-system pods found
	I1119 22:49:39.818747 1032597 system_pods.go:89] "coredns-66bc5c9577-snvrx" [ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:49:39.818757 1032597 system_pods.go:89] "etcd-pause-743639" [619d36e9-e393-4b99-9e1a-9139b0c405e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:49:39.818763 1032597 system_pods.go:89] "kindnet-9dzb9" [9eefb432-a68a-4f03-8e51-b3137d193739] Running
	I1119 22:49:39.818771 1032597 system_pods.go:89] "kube-apiserver-pause-743639" [b9839362-ec86-4f5b-ac52-d73251fa6223] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:49:39.818778 1032597 system_pods.go:89] "kube-controller-manager-pause-743639" [90695bd3-b7bd-460c-832a-9aea9b830258] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:49:39.818790 1032597 system_pods.go:89] "kube-proxy-jgn2m" [d654ae0c-812e-4423-82c1-860a834c4e1a] Running
	I1119 22:49:39.818798 1032597 system_pods.go:89] "kube-scheduler-pause-743639" [8e0d994d-cf46-4c79-b5c3-c883edc46a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:49:39.818809 1032597 system_pods.go:126] duration metric: took 99.299779ms to wait for k8s-apps to be running ...
	I1119 22:49:39.818818 1032597 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:49:39.818903 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:49:39.833402 1032597 system_svc.go:56] duration metric: took 14.567609ms WaitForService to wait for kubelet
	I1119 22:49:39.833490 1032597 kubeadm.go:587] duration metric: took 6.396914281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:49:39.833534 1032597 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:49:39.836931 1032597 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:49:39.837022 1032597 node_conditions.go:123] node cpu capacity is 2
	I1119 22:49:39.837065 1032597 node_conditions.go:105] duration metric: took 3.482573ms to run NodePressure ...
	I1119 22:49:39.837118 1032597 start.go:242] waiting for startup goroutines ...
	I1119 22:49:39.837144 1032597 start.go:247] waiting for cluster config update ...
	I1119 22:49:39.837167 1032597 start.go:256] writing updated cluster config ...
	I1119 22:49:39.837593 1032597 ssh_runner.go:195] Run: rm -f paused
	I1119 22:49:39.842506 1032597 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:49:39.843289 1032597 kapi.go:59] client config for pause-743639: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key", CAFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:49:39.915653 1032597 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-snvrx" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:49:41.941048 1032597 pod_ready.go:104] pod "coredns-66bc5c9577-snvrx" is not "Ready", error: <nil>
	I1119 22:49:41.641969 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:41.652128 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:41.652232 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:41.686260 1019854 cri.go:89] found id: ""
	I1119 22:49:41.686284 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.686293 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:41.686299 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:41.686359 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:41.712059 1019854 cri.go:89] found id: ""
	I1119 22:49:41.712085 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.712094 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:41.712101 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:41.712159 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:41.736890 1019854 cri.go:89] found id: ""
	I1119 22:49:41.736913 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.736921 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:41.736927 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:41.736985 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:41.771635 1019854 cri.go:89] found id: ""
	I1119 22:49:41.771710 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.771727 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:41.771735 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:41.771829 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:41.801594 1019854 cri.go:89] found id: ""
	I1119 22:49:41.801619 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.801628 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:41.801635 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:41.801742 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:41.833305 1019854 cri.go:89] found id: ""
	I1119 22:49:41.833331 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.833340 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:41.833347 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:41.833404 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:41.867907 1019854 cri.go:89] found id: ""
	I1119 22:49:41.867932 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.867940 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:41.867946 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:41.868008 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:41.895614 1019854 cri.go:89] found id: ""
	I1119 22:49:41.895637 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.895646 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:41.895654 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:41.895666 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:42.025527 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:42.025570 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:42.046313 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:42.046396 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:42.122229 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:42.122309 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:42.122343 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:42.164158 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:42.164213 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:44.712362 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:44.722904 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:44.722977 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:44.749177 1019854 cri.go:89] found id: ""
	I1119 22:49:44.749203 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.749213 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:44.749224 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:44.749285 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:44.775904 1019854 cri.go:89] found id: ""
	I1119 22:49:44.775967 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.775982 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:44.775990 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:44.776051 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:44.802018 1019854 cri.go:89] found id: ""
	I1119 22:49:44.802045 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.802054 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:44.802069 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:44.802164 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:44.827730 1019854 cri.go:89] found id: ""
	I1119 22:49:44.827755 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.827763 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:44.827770 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:44.827828 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:44.852953 1019854 cri.go:89] found id: ""
	I1119 22:49:44.852980 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.853001 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:44.853010 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:44.853088 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:44.879011 1019854 cri.go:89] found id: ""
	I1119 22:49:44.879080 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.879096 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:44.879104 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:44.879169 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:44.905266 1019854 cri.go:89] found id: ""
	I1119 22:49:44.905291 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.905300 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:44.905307 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:44.905363 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:44.933040 1019854 cri.go:89] found id: ""
	I1119 22:49:44.933067 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.933077 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:44.933086 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:44.933118 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:45.051676 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:45.051720 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:45.113602 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:45.113643 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:45.233125 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:45.233158 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:45.233174 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:45.293121 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:45.293234 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1119 22:49:44.420795 1032597 pod_ready.go:104] pod "coredns-66bc5c9577-snvrx" is not "Ready", error: <nil>
	I1119 22:49:45.422169 1032597 pod_ready.go:94] pod "coredns-66bc5c9577-snvrx" is "Ready"
	I1119 22:49:45.422199 1032597 pod_ready.go:86] duration metric: took 5.506515642s for pod "coredns-66bc5c9577-snvrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:45.425024 1032597 pod_ready.go:83] waiting for pod "etcd-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:49:47.430047 1032597 pod_ready.go:104] pod "etcd-pause-743639" is not "Ready", error: <nil>
	I1119 22:49:47.828207 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:47.838423 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:47.838492 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:47.864605 1019854 cri.go:89] found id: ""
	I1119 22:49:47.864631 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.864640 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:47.864647 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:47.864704 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:47.892588 1019854 cri.go:89] found id: ""
	I1119 22:49:47.892614 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.892624 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:47.892631 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:47.892689 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:47.922453 1019854 cri.go:89] found id: ""
	I1119 22:49:47.922481 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.922490 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:47.922496 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:47.922558 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:47.955943 1019854 cri.go:89] found id: ""
	I1119 22:49:47.955967 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.955976 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:47.955983 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:47.956047 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:47.981115 1019854 cri.go:89] found id: ""
	I1119 22:49:47.981139 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.981148 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:47.981154 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:47.981212 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:48.010915 1019854 cri.go:89] found id: ""
	I1119 22:49:48.011002 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.011027 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:48.011070 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:48.011185 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:48.040637 1019854 cri.go:89] found id: ""
	I1119 22:49:48.040662 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.040670 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:48.040677 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:48.040745 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:48.068988 1019854 cri.go:89] found id: ""
	I1119 22:49:48.069012 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.069021 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:48.069031 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:48.069042 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:48.087406 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:48.087439 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:48.159586 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:48.159606 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:48.159621 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:48.196781 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:48.196818 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:48.226499 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:48.226530 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1119 22:49:49.430582 1032597 pod_ready.go:104] pod "etcd-pause-743639" is not "Ready", error: <nil>
	I1119 22:49:49.931420 1032597 pod_ready.go:94] pod "etcd-pause-743639" is "Ready"
	I1119 22:49:49.931450 1032597 pod_ready.go:86] duration metric: took 4.506395557s for pod "etcd-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.934090 1032597 pod_ready.go:83] waiting for pod "kube-apiserver-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.938735 1032597 pod_ready.go:94] pod "kube-apiserver-pause-743639" is "Ready"
	I1119 22:49:49.938763 1032597 pod_ready.go:86] duration metric: took 4.645171ms for pod "kube-apiserver-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.941198 1032597 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.945856 1032597 pod_ready.go:94] pod "kube-controller-manager-pause-743639" is "Ready"
	I1119 22:49:49.945890 1032597 pod_ready.go:86] duration metric: took 4.652776ms for pod "kube-controller-manager-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.948615 1032597 pod_ready.go:83] waiting for pod "kube-proxy-jgn2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.129213 1032597 pod_ready.go:94] pod "kube-proxy-jgn2m" is "Ready"
	I1119 22:49:50.129238 1032597 pod_ready.go:86] duration metric: took 180.593627ms for pod "kube-proxy-jgn2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.329697 1032597 pod_ready.go:83] waiting for pod "kube-scheduler-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.729618 1032597 pod_ready.go:94] pod "kube-scheduler-pause-743639" is "Ready"
	I1119 22:49:50.729648 1032597 pod_ready.go:86] duration metric: took 399.919343ms for pod "kube-scheduler-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.729662 1032597 pod_ready.go:40] duration metric: took 10.887073278s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:49:50.780815 1032597 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:49:50.784029 1032597 out.go:179] * Done! kubectl is now configured to use "pause-743639" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.234483221Z" level=info msg="Starting container: f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27" id=fa67e669-5e22-4ca8-8cb1-babf20b1eb78 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.238461258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.245142282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.275545444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.27565555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.285283937Z" level=info msg="Started container" PID=2332 containerID=ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd description=kube-system/coredns-66bc5c9577-snvrx/coredns id=4eaaa63f-15b3-4a26-b958-227fa57e7d19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a66d74ba6fb86098b801952e1c1254a565784b97003aab8dc79acb3a7fba3d8
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.286190663Z" level=info msg="Started container" PID=2312 containerID=f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27 description=kube-system/kube-proxy-jgn2m/kube-proxy id=fa67e669-5e22-4ca8-8cb1-babf20b1eb78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=089669e1b30c9b098f031b40c18d40f1ad0bef4c14aa54ef15f3d8e989f03bf1
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.349708088Z" level=info msg="Created container f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec: kube-system/kube-apiserver-pause-743639/kube-apiserver" id=df59cb0e-9054-411d-91fe-d418ff9e56eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.353230654Z" level=info msg="Starting container: f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec" id=f6ac5b2c-5be7-4181-b601-47645c0f8187 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.355406817Z" level=info msg="Started container" PID=2357 containerID=f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec description=kube-system/kube-apiserver-pause-743639/kube-apiserver id=f6ac5b2c-5be7-4181-b601-47645c0f8187 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40163f2081742cd55d2ea3a53ae6841efbd943d899b170bb6703d485973bb119
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.355787654Z" level=info msg="Created container 16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8: kube-system/kube-scheduler-pause-743639/kube-scheduler" id=9f9e2f65-7020-4a4d-ba2e-29f1f84406ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.359166349Z" level=info msg="Starting container: 16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8" id=d0a74123-211f-4554-84a3-d9df8e90e489 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.362488912Z" level=info msg="Started container" PID=2356 containerID=16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8 description=kube-system/kube-scheduler-pause-743639/kube-scheduler id=d0a74123-211f-4554-84a3-d9df8e90e489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c52e799b5c75a34bb69320597a2de0215f10bd99fa1c3834af5d3cbc55bb230
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.556505805Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561011634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561047926Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561077424Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565254684Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565290483Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565316132Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569519468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569554792Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569579777Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.573594475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.573628855Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f8d6acd5f2bf9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   40163f2081742       kube-apiserver-pause-743639            kube-system
	16516a6a1f73d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   5c52e799b5c75       kube-scheduler-pause-743639            kube-system
	ab284bf12b2e5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   9a66d74ba6fb8       coredns-66bc5c9577-snvrx               kube-system
	f7a24f16a1401       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   089669e1b30c9       kube-proxy-jgn2m                       kube-system
	11c8fd6c6f971       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   ba46f803358b0       kindnet-9dzb9                          kube-system
	41ef975cdc862       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   84e9cd17c2fef       etcd-pause-743639                      kube-system
	6a370518913d3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   37f021fe5295f       kube-controller-manager-pause-743639   kube-system
	a1d839c4dd761       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   9a66d74ba6fb8       coredns-66bc5c9577-snvrx               kube-system
	4f23bd39b1b7b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ba46f803358b0       kindnet-9dzb9                          kube-system
	071f23bf7702e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   089669e1b30c9       kube-proxy-jgn2m                       kube-system
	3971476aca248       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c52e799b5c75       kube-scheduler-pause-743639            kube-system
	b2d0c49923463       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   40163f2081742       kube-apiserver-pause-743639            kube-system
	68a763f87346d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   84e9cd17c2fef       etcd-pause-743639                      kube-system
	1b88acdd51935       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   37f021fe5295f       kube-controller-manager-pause-743639   kube-system
	
	
	==> coredns [a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53986 - 57958 "HINFO IN 800128124084991565.8302540441610945696. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015927599s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45051 - 60938 "HINFO IN 4120834022136133682.5908664279437778434. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009566619s
	
	
	==> describe nodes <==
	Name:               pause-743639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-743639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=pause-743639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:48:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-743639
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:49:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-743639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                c25ea14b-509d-4342-a6f3-0f68227de082
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-snvrx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-743639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-9dzb9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-743639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-743639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-jgn2m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-743639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 74s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 80s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-743639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-743639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-743639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s   node-controller  Node pause-743639 event: Registered Node pause-743639 in Controller
	  Normal   NodeReady                33s   kubelet          Node pause-743639 status is now: NodeReady
	  Normal   RegisteredNode           12s   node-controller  Node pause-743639 event: Registered Node pause-743639 in Controller
	
	
	==> dmesg <==
	[ +33.914297] overlayfs: idmapped layers are currently not supported
	[Nov19 22:22] overlayfs: idmapped layers are currently not supported
	[Nov19 22:23] overlayfs: idmapped layers are currently not supported
	[  +3.200978] overlayfs: idmapped layers are currently not supported
	[Nov19 22:24] overlayfs: idmapped layers are currently not supported
	[ +20.253339] overlayfs: idmapped layers are currently not supported
	[Nov19 22:26] overlayfs: idmapped layers are currently not supported
	[Nov19 22:31] overlayfs: idmapped layers are currently not supported
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb] <==
	{"level":"warn","ts":"2025-11-19T22:49:36.490583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.499054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.532092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.563295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.610675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.653640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.683441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.712891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.731481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.771437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.787832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.817211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.848323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.879385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.910110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.940223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.977977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.997497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.023134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.058102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.094932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.136059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.154139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.176425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.294544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	
	
	==> etcd [68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c] <==
	{"level":"warn","ts":"2025-11-19T22:48:29.493524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.513589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.532799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.554891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.572755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.591133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.665475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51644","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:49:25.048127Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T22:49:25.048194Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-743639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-19T22:49:25.048302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T22:49:25.050740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T22:49:25.198100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.198164Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-19T22:49:25.198231Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T22:49:25.198249Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198295Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198367Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T22:49:25.198401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198486Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T22:49:25.198525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.201429Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-19T22:49:25.201507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.201563Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T22:49:25.201594Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-743639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 22:49:54 up  4:32,  0 user,  load average: 1.88, 2.48, 2.13
	Linux pause-743639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5] <==
	I1119 22:49:33.321129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:49:33.338500       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:49:33.338647       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:49:33.338660       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:49:33.338674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:49:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:49:33.553370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:49:33.553537       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:49:33.553571       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:49:33.556861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:49:38.256573       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:49:38.256633       1 metrics.go:72] Registering metrics
	I1119 22:49:38.256700       1 controller.go:711] "Syncing nftables rules"
	I1119 22:49:43.556030       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:43.556161       1 main.go:301] handling current node
	I1119 22:49:53.554970       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:53.555001       1 main.go:301] handling current node
	
	
	==> kindnet [4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579] <==
	I1119 22:48:39.727695       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:48:39.728041       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:48:39.728216       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:48:39.728259       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:48:39.728295       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:48:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:48:39.932297       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:48:39.932371       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:48:39.932410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:48:39.932784       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:49:09.932211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:49:09.932363       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:49:09.933108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:49:09.933240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:49:11.332603       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:49:11.332645       1 metrics.go:72] Registering metrics
	I1119 22:49:11.332727       1 controller.go:711] "Syncing nftables rules"
	I1119 22:49:19.934945       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:19.935006       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924] <==
	W1119 22:49:25.065856       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.065912       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.065957       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066411       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066460       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066500       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.067614       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.068308       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069162       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069195       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069241       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069281       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069437       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069749       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069780       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069803       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069937       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069998       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070042       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070225       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070287       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070292       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070319       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070321       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070344       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec] <==
	I1119 22:49:38.143626       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:49:38.143650       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:49:38.143657       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:49:38.143664       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:49:38.152269       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:49:38.152583       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:49:38.153944       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:49:38.154061       1 policy_source.go:240] refreshing policies
	I1119 22:49:38.168930       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:49:38.171092       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:49:38.173386       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:49:38.177043       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:49:38.179799       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:49:38.180599       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:49:38.180626       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:49:38.180719       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1119 22:49:38.201079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:49:38.203791       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:49:38.222917       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:49:38.899587       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:49:40.137943       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:49:41.518263       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:49:41.769179       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:49:41.819554       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:49:41.934106       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997] <==
	I1119 22:48:37.658979       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-743639" podCIDRs=["10.244.0.0/24"]
	I1119 22:48:37.663833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:48:37.664155       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:48:37.664203       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:48:37.667856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:48:37.672916       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:48:37.674107       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:48:37.674799       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:48:37.675040       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:48:37.675657       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:48:37.675731       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:48:37.676315       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:48:37.676635       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:48:37.676693       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:48:37.676774       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:48:37.676841       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-743639"
	I1119 22:48:37.676877       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:48:37.676912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:48:37.676955       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:48:37.676649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:48:37.687513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:48:37.688513       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:48:37.697917       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:48:38.935362       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1119 22:49:22.687143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f] <==
	I1119 22:49:41.519336       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:49:41.519367       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:49:41.519407       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:49:41.519523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:49:41.520951       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:49:41.525850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:49:41.525970       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:49:41.528183       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:49:41.531293       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:49:41.534942       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:49:41.536095       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:49:41.560667       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:49:41.561941       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:49:41.562012       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:49:41.562034       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:49:41.562130       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-743639"
	I1119 22:49:41.562235       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:49:41.562305       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:49:41.562378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:49:41.562447       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:49:41.562495       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:49:41.562531       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:49:41.563754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:49:41.563811       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:49:41.563841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	
	
	==> kube-proxy [071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622] <==
	I1119 22:48:39.630054       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:48:39.728472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:48:39.833680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:48:39.833808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:48:39.833906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:48:39.852011       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:48:39.852071       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:48:39.856306       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:48:39.856657       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:48:39.856691       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:48:39.858047       1 config.go:200] "Starting service config controller"
	I1119 22:48:39.858067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:48:39.858090       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:48:39.858094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:48:39.858107       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:48:39.858111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:48:39.858730       1 config.go:309] "Starting node config controller"
	I1119 22:48:39.858749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:48:39.858757       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:48:39.959080       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:48:39.961171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:48:39.958955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27] <==
	I1119 22:49:35.354257       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:49:37.363852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:49:38.323389       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:49:38.323942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:49:38.324082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:49:38.361703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:49:38.361801       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:49:38.368159       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:49:38.368499       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:49:38.368521       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:49:38.370791       1 config.go:200] "Starting service config controller"
	I1119 22:49:38.370951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:49:38.373452       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:49:38.374801       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:49:38.374123       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:49:38.374947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:49:38.374518       1 config.go:309] "Starting node config controller"
	I1119 22:49:38.375034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:49:38.375062       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:49:38.471466       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:49:38.475756       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:49:38.475767       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8] <==
	I1119 22:49:37.050785       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:49:37.922778       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:49:37.922806       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:49:37.922817       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:49:37.922824       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:49:38.128703       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:49:38.128822       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:49:38.139018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:38.139124       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:38.139837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:49:38.139910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:49:38.239732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb] <==
	E1119 22:48:30.795949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:48:30.795962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:48:30.796000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:48:30.796042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:48:30.796154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:48:30.796230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:48:30.796287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:48:30.796361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:48:30.796394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:48:30.796434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:48:31.627298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:48:31.649600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:48:31.721067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:48:31.776955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:48:31.808617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:48:31.839300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:48:31.889331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:48:31.935256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1119 22:48:34.673526       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:25.048875       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 22:49:25.048971       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 22:49:25.048982       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 22:49:25.048999       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:25.049158       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 22:49:25.049173       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 19 22:49:33 pause-743639 kubelet[1314]: I1119 22:49:33.118551    1314 scope.go:117] "RemoveContainer" containerID="3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120214    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120506    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d47521cb7b698fc949ff87e3718c9f3c" pod="kube-system/etcd-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120723    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b544df01855519c2eb101ce142a4d90b" pod="kube-system/kube-apiserver-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120905    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6db6c11330d02f32f392522f8c4e329a" pod="kube-system/kube-controller-manager-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121072    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgn2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d654ae0c-812e-4423-82c1-860a834c4e1a" pod="kube-system/kube-proxy-jgn2m"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121279    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9dzb9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9eefb432-a68a-4f03-8e51-b3137d193739" pod="kube-system/kindnet-9dzb9"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121438    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-snvrx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: I1119 22:49:33.160743    1314 scope.go:117] "RemoveContainer" containerID="b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.161324    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b544df01855519c2eb101ce142a4d90b" pod="kube-system/kube-apiserver-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163262    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6db6c11330d02f32f392522f8c4e329a" pod="kube-system/kube-controller-manager-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163456    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgn2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d654ae0c-812e-4423-82c1-860a834c4e1a" pod="kube-system/kube-proxy-jgn2m"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163620    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9dzb9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9eefb432-a68a-4f03-8e51-b3137d193739" pod="kube-system/kindnet-9dzb9"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163780    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-snvrx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163936    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.164093    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d47521cb7b698fc949ff87e3718c9f3c" pod="kube-system/etcd-pause-743639"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.073702    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-snvrx\" is forbidden: User \"system:node:pause-743639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.074708    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.074949    1314 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.079052    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.102178    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-743639\" is forbidden: User \"system:node:pause-743639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:44 pause-743639 kubelet[1314]: W1119 22:49:44.057065    1314 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 19 22:49:51 pause-743639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:49:51 pause-743639 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:49:51 pause-743639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
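The dump above ends with the scheduler shutting down its secure server ("finished without leader elect") and systemd stopping kubelet.service, which is consistent with the control plane being taken down while the profile is paused; the probes that follow check what the profile still reports as up. A minimal Go sketch of the APIServer probe the harness runs next (binary path and profile name are copied from this report; the helper itself is an illustration, not harness code):

	// status_probe.go: reproduce the {{.APIServer}} status probe run below.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "pause-743639", "-n", "pause-743639")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out) // prints "Running" here even though 8443 refuses connections
		if ee, ok := err.(*exec.ExitError); ok {
			// The harness treats exit status 2 as "may be ok": the profile exists
			// but at least one component is not reporting healthy.
			fmt.Println("exit status:", ee.ExitCode())
		}
	}
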
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-743639 -n pause-743639
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-743639 -n pause-743639: exit status 2 (358.873686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-743639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
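The kubectl check at helpers_test.go:269 above asks the apiserver for any pod whose phase is not Running; empty output is the passing case. A sketch of the same query from Go, assuming the pause-743639 context from this report:

	// nonrunning_pods.go: same field-selector query as helpers_test.go:269 above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "pause-743639",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		// Empty output means no pod is stuck outside the Running phase.
		fmt.Printf("non-Running pods: %q\n", out)
	}
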
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-743639
helpers_test.go:243: (dbg) docker inspect pause-743639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2",
	        "Created": "2025-11-19T22:48:06.825712726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1028391,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:48:06.89573547Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2/8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2-json.log",
	        "Name": "/pause-743639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-743639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-743639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8920cad255c56e8528cfe11d0f3e3d34aa48da3d86cb3f92389dcfce53ef13a2",
	                "LowerDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fdec4d73326c4971c7a5c693977e726873efe4a48e325807dec4f56f82b9a55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-743639",
	                "Source": "/var/lib/docker/volumes/pause-743639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-743639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-743639",
	                "name.minikube.sigs.k8s.io": "pause-743639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e153bff2a9e7d1f7bb75e614f3aea4ecb9dfb06aa3e075f765751d5c6db7cf9",
	            "SandboxKey": "/var/run/docker/netns/6e153bff2a9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-743639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:7e:b0:8e:a9:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f48b38c6e86e5dd0af19144d84febf77488cc9e3c0ba9242791a8c57aa1cb5ea",
	                    "EndpointID": "45d2df274d20312d52cec36704d806d1e20300dcc7d8538c7f9fbd3ff9e937d0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-743639",
	                        "8920cad255c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
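The inspect output above shows the failure signature for this test: the kic container still has Status "running" with Paused false, and 8443/tcp is published on 127.0.0.1:33819, yet the kubelet log earlier shows that port refusing connections. A sketch (an assumed helper, not harness code) that pulls just those fields, using the same Go-template style the harness already uses for the SSH port:

	// inspect_fields.go: extract the container state and the published 8443 port.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} paused={{.State.Paused}} ` +
			`apiserver_hostport={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "pause-743639").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // for the output above: running paused=false apiserver_hostport=33819
	}
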
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-743639 -n pause-743639
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-743639 -n pause-743639: exit status 2 (345.962145ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
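Both template probes above ({{.Host}} and {{.APIServer}}) print Running while the command itself exits 2, so the per-component picture has to come from the full status output. A sketch that reads it as JSON instead; the field names mirror the templates used above, and the exact JSON shape of minikube status -o json is an assumption here, not something this report shows:

	// status_json.go: read the full per-component status in one call.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// A degraded profile makes the command exit non-zero (2 above), so keep
		// stdout even when runErr is set.
		out, runErr := exec.Command("out/minikube-linux-arm64", "status",
			"-o", "json", "-p", "pause-743639").Output()
		var st profileStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unexpected status output:", err)
			return
		}
		fmt.Printf("%+v (run error: %v)\n", st, runErr)
	}
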
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-743639 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-743639 logs -n 25: (1.350340967s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-482978 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:43 UTC │ 19 Nov 25 22:44 UTC │
	│ start   │ -p missing-upgrade-290352 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-290352    │ jenkins │ v1.32.0 │ 19 Nov 25 22:44 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:44 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p missing-upgrade-290352 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-290352    │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ delete  │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ ssh     │ -p NoKubernetes-482978 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │                     │
	│ stop    │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p NoKubernetes-482978 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ ssh     │ -p NoKubernetes-482978 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │                     │
	│ delete  │ -p NoKubernetes-482978                                                                                                                   │ NoKubernetes-482978       │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:46 UTC │
	│ delete  │ -p missing-upgrade-290352                                                                                                                │ missing-upgrade-290352    │ jenkins │ v1.37.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:45 UTC │
	│ start   │ -p stopped-upgrade-196185 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-196185    │ jenkins │ v1.32.0 │ 19 Nov 25 22:45 UTC │ 19 Nov 25 22:46 UTC │
	│ stop    │ -p kubernetes-upgrade-154655                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:46 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │                     │
	│ stop    │ stopped-upgrade-196185 stop                                                                                                              │ stopped-upgrade-196185    │ jenkins │ v1.32.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:46 UTC │
	│ start   │ -p stopped-upgrade-196185 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-196185    │ jenkins │ v1.37.0 │ 19 Nov 25 22:46 UTC │ 19 Nov 25 22:47 UTC │
	│ delete  │ -p stopped-upgrade-196185                                                                                                                │ stopped-upgrade-196185    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ start   │ -p running-upgrade-770765 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-770765    │ jenkins │ v1.32.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ start   │ -p running-upgrade-770765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-770765    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:47 UTC │
	│ delete  │ -p running-upgrade-770765                                                                                                                │ running-upgrade-770765    │ jenkins │ v1.37.0 │ 19 Nov 25 22:47 UTC │ 19 Nov 25 22:48 UTC │
	│ start   │ -p pause-743639 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:48 UTC │ 19 Nov 25 22:49 UTC │
	│ start   │ -p pause-743639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:49 UTC │ 19 Nov 25 22:49 UTC │
	│ pause   │ -p pause-743639 --alsologtostderr -v=5                                                                                                   │ pause-743639              │ jenkins │ v1.37.0 │ 19 Nov 25 22:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:49:23
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:49:23.013720 1032597 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:49:23.013845 1032597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:23.013858 1032597 out.go:374] Setting ErrFile to fd 2...
	I1119 22:49:23.013863 1032597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:49:23.014163 1032597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:49:23.014626 1032597 out.go:368] Setting JSON to false
	I1119 22:49:23.015705 1032597 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16292,"bootTime":1763576271,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:49:23.015793 1032597 start.go:143] virtualization:  
	I1119 22:49:23.021068 1032597 out.go:179] * [pause-743639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:49:23.024354 1032597 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:49:23.024400 1032597 notify.go:221] Checking for updates...
	I1119 22:49:23.031037 1032597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:49:23.033937 1032597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:49:23.036910 1032597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:49:23.039708 1032597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:49:23.042575 1032597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:49:23.045929 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:23.046493 1032597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:49:23.072420 1032597 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:49:23.072546 1032597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:49:23.174091 1032597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:49:23.160575272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:49:23.174189 1032597 docker.go:319] overlay module found
	I1119 22:49:23.177393 1032597 out.go:179] * Using the docker driver based on existing profile
	I1119 22:49:23.180643 1032597 start.go:309] selected driver: docker
	I1119 22:49:23.180672 1032597 start.go:930] validating driver "docker" against &{Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:23.180814 1032597 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:49:23.180917 1032597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:49:23.269708 1032597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:49:23.260163128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:49:23.270125 1032597 cni.go:84] Creating CNI manager for ""
	I1119 22:49:23.270186 1032597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:49:23.270240 1032597 start.go:353] cluster config:
	{Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:23.273981 1032597 out.go:179] * Starting "pause-743639" primary control-plane node in "pause-743639" cluster
	I1119 22:49:23.276907 1032597 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:49:23.279911 1032597 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:49:23.283758 1032597 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:49:23.283808 1032597 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:49:23.283818 1032597 cache.go:65] Caching tarball of preloaded images
	I1119 22:49:23.283853 1032597 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:49:23.283903 1032597 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:49:23.283912 1032597 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:49:23.284059 1032597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/config.json ...
	I1119 22:49:23.310635 1032597 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:49:23.310656 1032597 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:49:23.310669 1032597 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:49:23.310692 1032597 start.go:360] acquireMachinesLock for pause-743639: {Name:mkd6c51ef21d7a72c2cd2654b0e7a0088542c569 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:49:23.310744 1032597 start.go:364] duration metric: took 36.283µs to acquireMachinesLock for "pause-743639"
	I1119 22:49:23.310762 1032597 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:49:23.310768 1032597 fix.go:54] fixHost starting: 
	I1119 22:49:23.311141 1032597 cli_runner.go:164] Run: docker container inspect pause-743639 --format={{.State.Status}}
	I1119 22:49:23.336593 1032597 fix.go:112] recreateIfNeeded on pause-743639: state=Running err=<nil>
	W1119 22:49:23.336621 1032597 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:49:23.067002 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:23.079453 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:23.079528 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:23.136141 1019854 cri.go:89] found id: ""
	I1119 22:49:23.136165 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.136183 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:23.136190 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:23.136251 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:23.174067 1019854 cri.go:89] found id: ""
	I1119 22:49:23.174092 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.174101 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:23.174107 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:23.174167 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:23.207519 1019854 cri.go:89] found id: ""
	I1119 22:49:23.207567 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.207576 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:23.207582 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:23.207664 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:23.248357 1019854 cri.go:89] found id: ""
	I1119 22:49:23.248385 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.248394 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:23.248403 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:23.248462 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:23.289089 1019854 cri.go:89] found id: ""
	I1119 22:49:23.289110 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.289118 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:23.289197 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:23.289263 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:23.327575 1019854 cri.go:89] found id: ""
	I1119 22:49:23.327595 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.327603 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:23.327609 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:23.327665 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:23.369345 1019854 cri.go:89] found id: ""
	I1119 22:49:23.369368 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.369383 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:23.369390 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:23.369451 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:23.403759 1019854 cri.go:89] found id: ""
	I1119 22:49:23.403787 1019854 logs.go:282] 0 containers: []
	W1119 22:49:23.403796 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:23.403805 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:23.403823 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:23.453411 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:23.453436 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:23.574049 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:23.574125 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:23.593288 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:23.593316 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:23.683540 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:23.683560 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:23.683573 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:23.339835 1032597 out.go:252] * Updating the running docker "pause-743639" container ...
	I1119 22:49:23.339879 1032597 machine.go:94] provisionDockerMachine start ...
	I1119 22:49:23.339976 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.370831 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.371213 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.371226 1032597 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:49:23.534983 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-743639
	
	I1119 22:49:23.535009 1032597 ubuntu.go:182] provisioning hostname "pause-743639"
	I1119 22:49:23.535080 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.561541 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.561853 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.561872 1032597 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-743639 && echo "pause-743639" | sudo tee /etc/hostname
	I1119 22:49:23.737677 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-743639
	
	I1119 22:49:23.737778 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:23.757339 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:23.757658 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:23.757689 1032597 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-743639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-743639/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-743639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:49:23.899318 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:49:23.899347 1032597 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:49:23.899379 1032597 ubuntu.go:190] setting up certificates
	I1119 22:49:23.899389 1032597 provision.go:84] configureAuth start
	I1119 22:49:23.899466 1032597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-743639
	I1119 22:49:23.926076 1032597 provision.go:143] copyHostCerts
	I1119 22:49:23.926150 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:49:23.926170 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:49:23.926271 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:49:23.926394 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:49:23.926404 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:49:23.926432 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:49:23.926500 1032597 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:49:23.926512 1032597 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:49:23.926538 1032597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:49:23.926608 1032597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.pause-743639 san=[127.0.0.1 192.168.85.2 localhost minikube pause-743639]
	I1119 22:49:24.669912 1032597 provision.go:177] copyRemoteCerts
	I1119 22:49:24.669982 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:49:24.670031 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:24.690632 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:24.795205 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:49:24.815442 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:49:24.834754 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:49:24.854251 1032597 provision.go:87] duration metric: took 954.829049ms to configureAuth
	I1119 22:49:24.854282 1032597 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:49:24.854531 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:24.854639 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:24.872042 1032597 main.go:143] libmachine: Using SSH client type: native
	I1119 22:49:24.872375 1032597 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33816 <nil> <nil>}
	I1119 22:49:24.872396 1032597 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:49:26.221725 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:26.232140 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:26.232224 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:26.257803 1019854 cri.go:89] found id: ""
	I1119 22:49:26.257828 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.257837 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:26.257843 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:26.257901 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:26.285478 1019854 cri.go:89] found id: ""
	I1119 22:49:26.285505 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.285514 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:26.285521 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:26.285584 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:26.310443 1019854 cri.go:89] found id: ""
	I1119 22:49:26.310468 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.310476 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:26.310483 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:26.310539 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:26.337770 1019854 cri.go:89] found id: ""
	I1119 22:49:26.337791 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.337799 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:26.337805 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:26.337863 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:26.363599 1019854 cri.go:89] found id: ""
	I1119 22:49:26.363624 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.363633 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:26.363640 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:26.363710 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:26.391201 1019854 cri.go:89] found id: ""
	I1119 22:49:26.391236 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.391246 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:26.391255 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:26.391330 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:26.416952 1019854 cri.go:89] found id: ""
	I1119 22:49:26.416975 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.416983 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:26.416989 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:26.417054 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:26.443492 1019854 cri.go:89] found id: ""
	I1119 22:49:26.443517 1019854 logs.go:282] 0 containers: []
	W1119 22:49:26.443526 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:26.443535 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:26.443563 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:26.557443 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:26.557483 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:26.575702 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:26.575732 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:26.644203 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:26.644220 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:26.644232 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:26.679599 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:26.679633 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
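Each retry in this block gathers the same diagnostics while process 1019854 waits for its apiserver to come back. Run by hand on that node, the cycle reduces to the commands below, taken directly from the Run: lines above:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a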
	I1119 22:49:29.209240 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:29.219515 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:29.219599 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:29.245286 1019854 cri.go:89] found id: ""
	I1119 22:49:29.245309 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.245317 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:29.245324 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:29.245383 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:29.271738 1019854 cri.go:89] found id: ""
	I1119 22:49:29.271763 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.271772 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:29.271787 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:29.271869 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:29.306638 1019854 cri.go:89] found id: ""
	I1119 22:49:29.306663 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.306672 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:29.306678 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:29.306735 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:29.332438 1019854 cri.go:89] found id: ""
	I1119 22:49:29.332463 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.332472 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:29.332478 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:29.332541 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:29.361424 1019854 cri.go:89] found id: ""
	I1119 22:49:29.361448 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.361457 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:29.361463 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:29.361522 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:29.387494 1019854 cri.go:89] found id: ""
	I1119 22:49:29.387519 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.387527 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:29.387534 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:29.387593 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:29.413779 1019854 cri.go:89] found id: ""
	I1119 22:49:29.413804 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.413813 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:29.413827 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:29.413896 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:29.440065 1019854 cri.go:89] found id: ""
	I1119 22:49:29.440094 1019854 logs.go:282] 0 containers: []
	W1119 22:49:29.440104 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:29.440112 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:29.440124 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:29.552582 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:29.552617 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:29.569225 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:29.569253 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:29.635548 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:29.635568 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:29.635581 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:29.671905 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:29.671940 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:30.271359 1032597 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:49:30.271384 1032597 machine.go:97] duration metric: took 6.931496344s to provisionDockerMachine
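The SSH command that finished at 22:49:30.271 wrote CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarted CRI-O. A quick follow-up check that the override landed and that the unit sources it (the EnvironmentFile= reference is an assumption about the kicbase crio unit; it is not shown in this log):

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
    systemctl cat crio | grep -i EnvironmentFile   # assumed: the unit lists crio.minikube here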
	I1119 22:49:30.271412 1032597 start.go:293] postStartSetup for "pause-743639" (driver="docker")
	I1119 22:49:30.271426 1032597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:49:30.271495 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:49:30.271541 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.289734 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.390925 1032597 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:49:30.394521 1032597 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:49:30.394549 1032597 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:49:30.394561 1032597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:49:30.394618 1032597 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:49:30.394709 1032597 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:49:30.394836 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:49:30.402795 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:49:30.421044 1032597 start.go:296] duration metric: took 149.613212ms for postStartSetup
	I1119 22:49:30.421172 1032597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:49:30.421221 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.438515 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.536202 1032597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:49:30.542468 1032597 fix.go:56] duration metric: took 7.231691723s for fixHost
	I1119 22:49:30.542510 1032597 start.go:83] releasing machines lock for "pause-743639", held for 7.231756528s
	I1119 22:49:30.542724 1032597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-743639
	I1119 22:49:30.560312 1032597 ssh_runner.go:195] Run: cat /version.json
	I1119 22:49:30.560354 1032597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:49:30.560366 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.560412 1032597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-743639
	I1119 22:49:30.584127 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.588529 1032597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/pause-743639/id_rsa Username:docker}
	I1119 22:49:30.775603 1032597 ssh_runner.go:195] Run: systemctl --version
	I1119 22:49:30.782093 1032597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:49:30.821565 1032597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:49:30.826074 1032597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:49:30.826206 1032597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:49:30.834038 1032597 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:49:30.834115 1032597 start.go:496] detecting cgroup driver to use...
	I1119 22:49:30.834153 1032597 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:49:30.834207 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:49:30.849700 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:49:30.863332 1032597 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:49:30.863463 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:49:30.879486 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:49:30.892955 1032597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:49:31.031013 1032597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:49:31.164761 1032597 docker.go:234] disabling docker service ...
	I1119 22:49:31.164878 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:49:31.179966 1032597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:49:31.193343 1032597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:49:31.331886 1032597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:49:31.463340 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:49:31.476660 1032597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:49:31.492270 1032597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:49:31.492387 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.501591 1032597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:49:31.501661 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.510738 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.519910 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.529006 1032597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:49:31.537833 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.547038 1032597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.556394 1032597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:49:31.565326 1032597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:49:31.572876 1032597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:49:31.580644 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:31.716529 1032597 ssh_runner.go:195] Run: sudo systemctl restart crio
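Between 22:49:31.476 and the restart above, the runtime is reconfigured in place: /etc/crictl.yaml is pointed at unix:///var/run/crio/crio.sock, and the sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf to pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch of how to review the result on the node (key names follow the sed expressions above):

    cat /etc/crictl.yaml
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    systemctl is-active crio && sudo crictl version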
	I1119 22:49:31.928963 1032597 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:49:31.929086 1032597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:49:31.933519 1032597 start.go:564] Will wait 60s for crictl version
	I1119 22:49:31.933586 1032597 ssh_runner.go:195] Run: which crictl
	I1119 22:49:31.937128 1032597 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:49:31.969342 1032597 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:49:31.969445 1032597 ssh_runner.go:195] Run: crio --version
	I1119 22:49:31.998117 1032597 ssh_runner.go:195] Run: crio --version
	I1119 22:49:32.031007 1032597 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:49:32.033947 1032597 cli_runner.go:164] Run: docker network inspect pause-743639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:49:32.051127 1032597 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:49:32.056512 1032597 kubeadm.go:884] updating cluster {Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:49:32.056659 1032597 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:49:32.056767 1032597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:49:32.091393 1032597 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:49:32.091416 1032597 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:49:32.091472 1032597 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:49:32.118343 1032597 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:49:32.118365 1032597 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:49:32.118382 1032597 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:49:32.118494 1032597 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-743639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
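The [Unit]/[Service] fragment above becomes a systemd drop-in: the scp calls just below write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. Once daemon-reload has run, the merged unit and the effective flags can be inspected with:

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # effective ExecStart, matching the flags above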
	I1119 22:49:32.118583 1032597 ssh_runner.go:195] Run: crio config
	I1119 22:49:32.179928 1032597 cni.go:84] Creating CNI manager for ""
	I1119 22:49:32.179955 1032597 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:49:32.179973 1032597 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:49:32.179998 1032597 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-743639 NodeName:pause-743639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:49:32.180119 1032597 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-743639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
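The kubeadm/kubelet/kube-proxy config rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte scp below). One way to sanity-check such a file before reuse is kubeadm's own validator; the kubeadm path here is an assumption, mirroring the /var/lib/minikube/binaries/v1.34.1/ directory this log uses for kubectl:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new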
	
	I1119 22:49:32.180201 1032597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:49:32.188450 1032597 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:49:32.188580 1032597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:49:32.196148 1032597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1119 22:49:32.211588 1032597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:49:32.228731 1032597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 22:49:32.243678 1032597 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:49:32.248775 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:32.427818 1032597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:49:32.444334 1032597 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639 for IP: 192.168.85.2
	I1119 22:49:32.444395 1032597 certs.go:195] generating shared ca certs ...
	I1119 22:49:32.444451 1032597 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:32.444668 1032597 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:49:32.444749 1032597 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:49:32.444786 1032597 certs.go:257] generating profile certs ...
	I1119 22:49:32.444936 1032597 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key
	I1119 22:49:32.445044 1032597 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.key.69c82afc
	I1119 22:49:32.445130 1032597 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.key
	I1119 22:49:32.445323 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:49:32.445395 1032597 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:49:32.445423 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:49:32.445487 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:49:32.445553 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:49:32.445614 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:49:32.445717 1032597 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:49:32.446620 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:49:32.468890 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:49:32.491420 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:49:32.512789 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:49:32.546778 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 22:49:32.570780 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:49:32.592293 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:49:32.616422 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:49:32.640803 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:49:32.661795 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:49:32.682153 1032597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:49:32.704567 1032597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:49:32.720555 1032597 ssh_runner.go:195] Run: openssl version
	I1119 22:49:32.727880 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:49:32.738832 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.743181 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.743300 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:49:32.788291 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:49:32.797867 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:49:32.807039 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.811892 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.812008 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:49:32.853983 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 22:49:32.862049 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:49:32.870689 1032597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.874573 1032597 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.874647 1032597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:49:32.915919 1032597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
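The three certificate installs between 22:49:32.727 and 22:49:32.915 all follow the same pattern: place the PEM under /usr/share/ca-certificates, hash it with openssl, then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL's subject-hash lookup finds it. A condensed version of the same steps for a single certificate, using the minikubeCA path from this log as the example:

    CERT=/usr/share/ca-certificates/minikubeCA.pem     # example file from this log
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"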
	I1119 22:49:32.924547 1032597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:49:32.928548 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:49:32.969604 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:49:33.011433 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:49:33.058371 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:49:33.117393 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:49:33.219423 1032597 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
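Each openssl run above uses -checkend 86400, i.e. it exits non-zero if the certificate expires within the next 24 hours, which is what would force a regeneration. The same check over all six control-plane certs touched here, as one loop:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "ok       ${c}" || echo "expiring ${c}"
    done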
	I1119 22:49:33.303018 1032597 kubeadm.go:401] StartCluster: {Name:pause-743639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-743639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:49:33.303129 1032597 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:49:33.303195 1032597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:49:33.369271 1032597 cri.go:89] found id: "f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec"
	I1119 22:49:33.369294 1032597 cri.go:89] found id: "16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8"
	I1119 22:49:33.369300 1032597 cri.go:89] found id: "ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd"
	I1119 22:49:33.369303 1032597 cri.go:89] found id: "f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27"
	I1119 22:49:33.369307 1032597 cri.go:89] found id: "11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5"
	I1119 22:49:33.369311 1032597 cri.go:89] found id: "41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb"
	I1119 22:49:33.369314 1032597 cri.go:89] found id: "6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f"
	I1119 22:49:33.369318 1032597 cri.go:89] found id: "a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16"
	I1119 22:49:33.369321 1032597 cri.go:89] found id: "4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579"
	I1119 22:49:33.369329 1032597 cri.go:89] found id: "071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622"
	I1119 22:49:33.369332 1032597 cri.go:89] found id: "3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	I1119 22:49:33.369336 1032597 cri.go:89] found id: "b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	I1119 22:49:33.369339 1032597 cri.go:89] found id: "68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c"
	I1119 22:49:33.369342 1032597 cri.go:89] found id: "1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997"
	I1119 22:49:33.369345 1032597 cri.go:89] found id: ""
	I1119 22:49:33.369396 1032597 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:49:33.390729 1032597 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:49:33Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:49:33.390815 1032597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:49:33.406860 1032597 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:49:33.406978 1032597 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:49:33.407031 1032597 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:49:33.421184 1032597 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:49:33.421797 1032597 kubeconfig.go:125] found "pause-743639" server: "https://192.168.85.2:8443"
	I1119 22:49:33.422575 1032597 kapi.go:59] client config for pause-743639: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key", CAFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:49:33.423073 1032597 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:49:33.423090 1032597 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:49:33.423096 1032597 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:49:33.423100 1032597 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:49:33.423105 1032597 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:49:33.423385 1032597 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:49:33.435282 1032597 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:49:33.435315 1032597 kubeadm.go:602] duration metric: took 28.330196ms to restartPrimaryControlPlane
	I1119 22:49:33.435324 1032597 kubeadm.go:403] duration metric: took 132.315654ms to StartCluster
	I1119 22:49:33.435344 1032597 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:33.435406 1032597 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:49:33.436325 1032597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:49:33.436542 1032597 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:49:33.436862 1032597 config.go:182] Loaded profile config "pause-743639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:49:33.436914 1032597 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:49:33.440256 1032597 out.go:179] * Enabled addons: 
	I1119 22:49:33.440335 1032597 out.go:179] * Verifying Kubernetes components...
	I1119 22:49:32.201673 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:32.213435 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:32.213505 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:32.249085 1019854 cri.go:89] found id: ""
	I1119 22:49:32.249105 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.249113 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:32.249119 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:32.249168 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:32.279551 1019854 cri.go:89] found id: ""
	I1119 22:49:32.279578 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.279586 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:32.279593 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:32.279650 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:32.322078 1019854 cri.go:89] found id: ""
	I1119 22:49:32.322105 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.322120 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:32.322127 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:32.322182 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:32.365337 1019854 cri.go:89] found id: ""
	I1119 22:49:32.365365 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.365374 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:32.365381 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:32.365441 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:32.400969 1019854 cri.go:89] found id: ""
	I1119 22:49:32.400991 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.401001 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:32.401008 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:32.401076 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:32.433775 1019854 cri.go:89] found id: ""
	I1119 22:49:32.433798 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.433807 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:32.433813 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:32.433872 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:32.471730 1019854 cri.go:89] found id: ""
	I1119 22:49:32.471752 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.471760 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:32.471767 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:32.471823 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:32.508404 1019854 cri.go:89] found id: ""
	I1119 22:49:32.508426 1019854 logs.go:282] 0 containers: []
	W1119 22:49:32.508435 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:32.508444 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:32.508455 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:32.549902 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:32.550123 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:32.586180 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:32.586259 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:32.717324 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:32.717364 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:32.736861 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:32.736897 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:32.824331 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:35.324833 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:35.335009 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:35.335082 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:35.369443 1019854 cri.go:89] found id: ""
	I1119 22:49:35.369468 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.369477 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:35.369483 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:35.369550 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:35.408360 1019854 cri.go:89] found id: ""
	I1119 22:49:35.408386 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.408410 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:35.408416 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:35.408481 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:35.445232 1019854 cri.go:89] found id: ""
	I1119 22:49:35.445260 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.445269 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:35.445275 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:35.445344 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:35.495987 1019854 cri.go:89] found id: ""
	I1119 22:49:35.496012 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.496020 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:35.496026 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:35.496084 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:35.549058 1019854 cri.go:89] found id: ""
	I1119 22:49:35.549084 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.549093 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:35.549099 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:35.549158 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:33.443319 1032597 addons.go:515] duration metric: took 6.385727ms for enable addons: enabled=[]
	I1119 22:49:33.443413 1032597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:49:33.714005 1032597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:49:33.735247 1032597 node_ready.go:35] waiting up to 6m0s for node "pause-743639" to be "Ready" ...
	I1119 22:49:35.612534 1019854 cri.go:89] found id: ""
	I1119 22:49:35.612560 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.612570 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:35.612576 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:35.612639 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:35.657484 1019854 cri.go:89] found id: ""
	I1119 22:49:35.657511 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.657521 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:35.657527 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:35.657600 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:35.704997 1019854 cri.go:89] found id: ""
	I1119 22:49:35.705022 1019854 logs.go:282] 0 containers: []
	W1119 22:49:35.705031 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:35.705040 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:35.705052 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:35.723771 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:35.723802 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:35.814124 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:35.814145 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:35.814158 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:35.866356 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:35.870967 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:35.924406 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:35.924436 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:38.591335 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:38.601635 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:38.601710 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:38.632460 1019854 cri.go:89] found id: ""
	I1119 22:49:38.632486 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.632495 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:38.632502 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:38.632568 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:38.659100 1019854 cri.go:89] found id: ""
	I1119 22:49:38.659126 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.659135 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:38.659141 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:38.659200 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:38.685689 1019854 cri.go:89] found id: ""
	I1119 22:49:38.685715 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.685723 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:38.685730 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:38.685790 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:38.720867 1019854 cri.go:89] found id: ""
	I1119 22:49:38.720893 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.720901 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:38.720908 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:38.720966 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:38.746825 1019854 cri.go:89] found id: ""
	I1119 22:49:38.746851 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.746861 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:38.746895 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:38.746957 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:38.774045 1019854 cri.go:89] found id: ""
	I1119 22:49:38.774071 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.774081 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:38.774088 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:38.774148 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:38.800777 1019854 cri.go:89] found id: ""
	I1119 22:49:38.800802 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.800812 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:38.800818 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:38.800878 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:38.827313 1019854 cri.go:89] found id: ""
	I1119 22:49:38.827337 1019854 logs.go:282] 0 containers: []
	W1119 22:49:38.827346 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:38.827355 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:38.827370 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:38.947330 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:38.947369 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:38.963922 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:38.963948 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:39.048624 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:39.048684 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:39.048723 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:39.086765 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:39.086842 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:38.181483 1032597 node_ready.go:49] node "pause-743639" is "Ready"
	I1119 22:49:38.181513 1032597 node_ready.go:38] duration metric: took 4.446235636s for node "pause-743639" to be "Ready" ...
	I1119 22:49:38.181528 1032597 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:49:38.181589 1032597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:38.200730 1032597 api_server.go:72] duration metric: took 4.764150099s to wait for apiserver process to appear ...
	I1119 22:49:38.200751 1032597 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:49:38.200772 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:38.210226 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:38.210269 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:38.700894 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:38.709643 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:38.709673 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:39.200870 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:39.211842 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:49:39.211934 1032597 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:49:39.701567 1032597 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:49:39.710946 1032597 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:49:39.712036 1032597 api_server.go:141] control plane version: v1.34.1
	I1119 22:49:39.712062 1032597 api_server.go:131] duration metric: took 1.511302538s to wait for apiserver health ...
	I1119 22:49:39.712071 1032597 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:49:39.716754 1032597 system_pods.go:59] 7 kube-system pods found
	I1119 22:49:39.716790 1032597 system_pods.go:61] "coredns-66bc5c9577-snvrx" [ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:49:39.716801 1032597 system_pods.go:61] "etcd-pause-743639" [619d36e9-e393-4b99-9e1a-9139b0c405e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:49:39.716807 1032597 system_pods.go:61] "kindnet-9dzb9" [9eefb432-a68a-4f03-8e51-b3137d193739] Running
	I1119 22:49:39.716814 1032597 system_pods.go:61] "kube-apiserver-pause-743639" [b9839362-ec86-4f5b-ac52-d73251fa6223] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:49:39.716824 1032597 system_pods.go:61] "kube-controller-manager-pause-743639" [90695bd3-b7bd-460c-832a-9aea9b830258] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:49:39.716832 1032597 system_pods.go:61] "kube-proxy-jgn2m" [d654ae0c-812e-4423-82c1-860a834c4e1a] Running
	I1119 22:49:39.716839 1032597 system_pods.go:61] "kube-scheduler-pause-743639" [8e0d994d-cf46-4c79-b5c3-c883edc46a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:49:39.716853 1032597 system_pods.go:74] duration metric: took 4.775133ms to wait for pod list to return data ...
	I1119 22:49:39.716864 1032597 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:49:39.719464 1032597 default_sa.go:45] found service account: "default"
	I1119 22:49:39.719492 1032597 default_sa.go:55] duration metric: took 2.617341ms for default service account to be created ...
	I1119 22:49:39.719503 1032597 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:49:39.818712 1032597 system_pods.go:86] 7 kube-system pods found
	I1119 22:49:39.818747 1032597 system_pods.go:89] "coredns-66bc5c9577-snvrx" [ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:49:39.818757 1032597 system_pods.go:89] "etcd-pause-743639" [619d36e9-e393-4b99-9e1a-9139b0c405e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:49:39.818763 1032597 system_pods.go:89] "kindnet-9dzb9" [9eefb432-a68a-4f03-8e51-b3137d193739] Running
	I1119 22:49:39.818771 1032597 system_pods.go:89] "kube-apiserver-pause-743639" [b9839362-ec86-4f5b-ac52-d73251fa6223] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:49:39.818778 1032597 system_pods.go:89] "kube-controller-manager-pause-743639" [90695bd3-b7bd-460c-832a-9aea9b830258] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:49:39.818790 1032597 system_pods.go:89] "kube-proxy-jgn2m" [d654ae0c-812e-4423-82c1-860a834c4e1a] Running
	I1119 22:49:39.818798 1032597 system_pods.go:89] "kube-scheduler-pause-743639" [8e0d994d-cf46-4c79-b5c3-c883edc46a78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:49:39.818809 1032597 system_pods.go:126] duration metric: took 99.299779ms to wait for k8s-apps to be running ...
	I1119 22:49:39.818818 1032597 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:49:39.818903 1032597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:49:39.833402 1032597 system_svc.go:56] duration metric: took 14.567609ms WaitForService to wait for kubelet
	I1119 22:49:39.833490 1032597 kubeadm.go:587] duration metric: took 6.396914281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:49:39.833534 1032597 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:49:39.836931 1032597 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:49:39.837022 1032597 node_conditions.go:123] node cpu capacity is 2
	I1119 22:49:39.837065 1032597 node_conditions.go:105] duration metric: took 3.482573ms to run NodePressure ...
	I1119 22:49:39.837118 1032597 start.go:242] waiting for startup goroutines ...
	I1119 22:49:39.837144 1032597 start.go:247] waiting for cluster config update ...
	I1119 22:49:39.837167 1032597 start.go:256] writing updated cluster config ...
	I1119 22:49:39.837593 1032597 ssh_runner.go:195] Run: rm -f paused
	I1119 22:49:39.842506 1032597 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:49:39.843289 1032597 kapi.go:59] client config for pause-743639: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/profiles/pause-743639/client.key", CAFile:"/home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:49:39.915653 1032597 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-snvrx" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:49:41.941048 1032597 pod_ready.go:104] pod "coredns-66bc5c9577-snvrx" is not "Ready", error: <nil>
	I1119 22:49:41.641969 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:41.652128 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:41.652232 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:41.686260 1019854 cri.go:89] found id: ""
	I1119 22:49:41.686284 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.686293 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:41.686299 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:41.686359 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:41.712059 1019854 cri.go:89] found id: ""
	I1119 22:49:41.712085 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.712094 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:41.712101 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:41.712159 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:41.736890 1019854 cri.go:89] found id: ""
	I1119 22:49:41.736913 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.736921 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:41.736927 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:41.736985 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:41.771635 1019854 cri.go:89] found id: ""
	I1119 22:49:41.771710 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.771727 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:41.771735 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:41.771829 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:41.801594 1019854 cri.go:89] found id: ""
	I1119 22:49:41.801619 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.801628 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:41.801635 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:41.801742 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:41.833305 1019854 cri.go:89] found id: ""
	I1119 22:49:41.833331 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.833340 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:41.833347 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:41.833404 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:41.867907 1019854 cri.go:89] found id: ""
	I1119 22:49:41.867932 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.867940 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:41.867946 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:41.868008 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:41.895614 1019854 cri.go:89] found id: ""
	I1119 22:49:41.895637 1019854 logs.go:282] 0 containers: []
	W1119 22:49:41.895646 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:41.895654 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:41.895666 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:42.025527 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:42.025570 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:42.046313 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:42.046396 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:42.122229 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:42.122309 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:42.122343 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:42.164158 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:42.164213 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:44.712362 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:44.722904 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:44.722977 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:44.749177 1019854 cri.go:89] found id: ""
	I1119 22:49:44.749203 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.749213 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:44.749224 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:44.749285 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:44.775904 1019854 cri.go:89] found id: ""
	I1119 22:49:44.775967 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.775982 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:44.775990 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:44.776051 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:44.802018 1019854 cri.go:89] found id: ""
	I1119 22:49:44.802045 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.802054 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:44.802069 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:44.802164 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:44.827730 1019854 cri.go:89] found id: ""
	I1119 22:49:44.827755 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.827763 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:44.827770 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:44.827828 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:44.852953 1019854 cri.go:89] found id: ""
	I1119 22:49:44.852980 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.853001 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:44.853010 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:44.853088 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:44.879011 1019854 cri.go:89] found id: ""
	I1119 22:49:44.879080 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.879096 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:44.879104 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:44.879169 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:44.905266 1019854 cri.go:89] found id: ""
	I1119 22:49:44.905291 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.905300 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:44.905307 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:44.905363 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:44.933040 1019854 cri.go:89] found id: ""
	I1119 22:49:44.933067 1019854 logs.go:282] 0 containers: []
	W1119 22:49:44.933077 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:44.933086 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:44.933118 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:45.051676 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:45.051720 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:45.113602 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:45.113643 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:45.233125 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:45.233158 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:45.233174 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:45.293121 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:45.293234 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1119 22:49:44.420795 1032597 pod_ready.go:104] pod "coredns-66bc5c9577-snvrx" is not "Ready", error: <nil>
	I1119 22:49:45.422169 1032597 pod_ready.go:94] pod "coredns-66bc5c9577-snvrx" is "Ready"
	I1119 22:49:45.422199 1032597 pod_ready.go:86] duration metric: took 5.506515642s for pod "coredns-66bc5c9577-snvrx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:45.425024 1032597 pod_ready.go:83] waiting for pod "etcd-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:49:47.430047 1032597 pod_ready.go:104] pod "etcd-pause-743639" is not "Ready", error: <nil>
	I1119 22:49:47.828207 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:47.838423 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:47.838492 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:47.864605 1019854 cri.go:89] found id: ""
	I1119 22:49:47.864631 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.864640 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:47.864647 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:47.864704 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:47.892588 1019854 cri.go:89] found id: ""
	I1119 22:49:47.892614 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.892624 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:47.892631 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:47.892689 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:47.922453 1019854 cri.go:89] found id: ""
	I1119 22:49:47.922481 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.922490 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:47.922496 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:47.922558 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:47.955943 1019854 cri.go:89] found id: ""
	I1119 22:49:47.955967 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.955976 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:47.955983 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:47.956047 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:47.981115 1019854 cri.go:89] found id: ""
	I1119 22:49:47.981139 1019854 logs.go:282] 0 containers: []
	W1119 22:49:47.981148 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:47.981154 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:47.981212 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:48.010915 1019854 cri.go:89] found id: ""
	I1119 22:49:48.011002 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.011027 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:48.011070 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:48.011185 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:48.040637 1019854 cri.go:89] found id: ""
	I1119 22:49:48.040662 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.040670 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:48.040677 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:48.040745 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:48.068988 1019854 cri.go:89] found id: ""
	I1119 22:49:48.069012 1019854 logs.go:282] 0 containers: []
	W1119 22:49:48.069021 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:48.069031 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:48.069042 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:48.087406 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:48.087439 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:48.159586 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:48.159606 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:48.159621 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:48.196781 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:48.196818 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:48.226499 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:48.226530 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1119 22:49:49.430582 1032597 pod_ready.go:104] pod "etcd-pause-743639" is not "Ready", error: <nil>
	I1119 22:49:49.931420 1032597 pod_ready.go:94] pod "etcd-pause-743639" is "Ready"
	I1119 22:49:49.931450 1032597 pod_ready.go:86] duration metric: took 4.506395557s for pod "etcd-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.934090 1032597 pod_ready.go:83] waiting for pod "kube-apiserver-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.938735 1032597 pod_ready.go:94] pod "kube-apiserver-pause-743639" is "Ready"
	I1119 22:49:49.938763 1032597 pod_ready.go:86] duration metric: took 4.645171ms for pod "kube-apiserver-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.941198 1032597 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.945856 1032597 pod_ready.go:94] pod "kube-controller-manager-pause-743639" is "Ready"
	I1119 22:49:49.945890 1032597 pod_ready.go:86] duration metric: took 4.652776ms for pod "kube-controller-manager-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:49.948615 1032597 pod_ready.go:83] waiting for pod "kube-proxy-jgn2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.129213 1032597 pod_ready.go:94] pod "kube-proxy-jgn2m" is "Ready"
	I1119 22:49:50.129238 1032597 pod_ready.go:86] duration metric: took 180.593627ms for pod "kube-proxy-jgn2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.329697 1032597 pod_ready.go:83] waiting for pod "kube-scheduler-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.729618 1032597 pod_ready.go:94] pod "kube-scheduler-pause-743639" is "Ready"
	I1119 22:49:50.729648 1032597 pod_ready.go:86] duration metric: took 399.919343ms for pod "kube-scheduler-pause-743639" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:49:50.729662 1032597 pod_ready.go:40] duration metric: took 10.887073278s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:49:50.780815 1032597 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:49:50.784029 1032597 out.go:179] * Done! kubectl is now configured to use "pause-743639" cluster and "default" namespace by default
	I1119 22:49:50.850405 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:50.864494 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:50.864566 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:50.905770 1019854 cri.go:89] found id: ""
	I1119 22:49:50.905800 1019854 logs.go:282] 0 containers: []
	W1119 22:49:50.905808 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:50.905815 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:50.905874 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:50.951241 1019854 cri.go:89] found id: ""
	I1119 22:49:50.951262 1019854 logs.go:282] 0 containers: []
	W1119 22:49:50.951271 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:50.951278 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:50.951338 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:50.985738 1019854 cri.go:89] found id: ""
	I1119 22:49:50.985759 1019854 logs.go:282] 0 containers: []
	W1119 22:49:50.985768 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:50.985774 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:50.985834 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:51.028487 1019854 cri.go:89] found id: ""
	I1119 22:49:51.028508 1019854 logs.go:282] 0 containers: []
	W1119 22:49:51.028517 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:51.028529 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:51.028590 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:51.073259 1019854 cri.go:89] found id: ""
	I1119 22:49:51.073287 1019854 logs.go:282] 0 containers: []
	W1119 22:49:51.073297 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:51.073304 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:51.073364 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:51.113115 1019854 cri.go:89] found id: ""
	I1119 22:49:51.113144 1019854 logs.go:282] 0 containers: []
	W1119 22:49:51.113152 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:51.113161 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:51.113221 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:51.142006 1019854 cri.go:89] found id: ""
	I1119 22:49:51.142033 1019854 logs.go:282] 0 containers: []
	W1119 22:49:51.142042 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:51.142048 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:51.142113 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:51.182630 1019854 cri.go:89] found id: ""
	I1119 22:49:51.182654 1019854 logs.go:282] 0 containers: []
	W1119 22:49:51.182662 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:51.182671 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:51.182684 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:51.231372 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:51.234498 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:51.279132 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:51.279200 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:49:51.410806 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:51.410897 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:51.427732 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:51.427762 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:51.512669 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:54.013621 1019854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:49:54.026859 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:49:54.027000 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:49:54.066987 1019854 cri.go:89] found id: ""
	I1119 22:49:54.067013 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.067022 1019854 logs.go:284] No container was found matching "kube-apiserver"
	I1119 22:49:54.067029 1019854 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:49:54.067093 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:49:54.108177 1019854 cri.go:89] found id: ""
	I1119 22:49:54.108212 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.108221 1019854 logs.go:284] No container was found matching "etcd"
	I1119 22:49:54.108228 1019854 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:49:54.108291 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:49:54.138156 1019854 cri.go:89] found id: ""
	I1119 22:49:54.138178 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.138187 1019854 logs.go:284] No container was found matching "coredns"
	I1119 22:49:54.138193 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:49:54.138249 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:49:54.177127 1019854 cri.go:89] found id: ""
	I1119 22:49:54.177155 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.177164 1019854 logs.go:284] No container was found matching "kube-scheduler"
	I1119 22:49:54.177172 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:49:54.177234 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:49:54.212712 1019854 cri.go:89] found id: ""
	I1119 22:49:54.212740 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.212749 1019854 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:49:54.212756 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:49:54.212814 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:49:54.243286 1019854 cri.go:89] found id: ""
	I1119 22:49:54.243312 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.243321 1019854 logs.go:284] No container was found matching "kube-controller-manager"
	I1119 22:49:54.243327 1019854 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:49:54.243384 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:49:54.274149 1019854 cri.go:89] found id: ""
	I1119 22:49:54.274175 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.274186 1019854 logs.go:284] No container was found matching "kindnet"
	I1119 22:49:54.274192 1019854 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:49:54.274255 1019854 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:49:54.301601 1019854 cri.go:89] found id: ""
	I1119 22:49:54.301627 1019854 logs.go:282] 0 containers: []
	W1119 22:49:54.301636 1019854 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:49:54.301645 1019854 logs.go:123] Gathering logs for dmesg ...
	I1119 22:49:54.301656 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:49:54.319932 1019854 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:49:54.319966 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:49:54.417859 1019854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:49:54.417879 1019854 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:49:54.417892 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:49:54.459463 1019854 logs.go:123] Gathering logs for container status ...
	I1119 22:49:54.459497 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:49:54.498794 1019854 logs.go:123] Gathering logs for kubelet ...
	I1119 22:49:54.498817 1019854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.234483221Z" level=info msg="Starting container: f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27" id=fa67e669-5e22-4ca8-8cb1-babf20b1eb78 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.238461258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.245142282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.275545444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.27565555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.285283937Z" level=info msg="Started container" PID=2332 containerID=ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd description=kube-system/coredns-66bc5c9577-snvrx/coredns id=4eaaa63f-15b3-4a26-b958-227fa57e7d19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a66d74ba6fb86098b801952e1c1254a565784b97003aab8dc79acb3a7fba3d8
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.286190663Z" level=info msg="Started container" PID=2312 containerID=f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27 description=kube-system/kube-proxy-jgn2m/kube-proxy id=fa67e669-5e22-4ca8-8cb1-babf20b1eb78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=089669e1b30c9b098f031b40c18d40f1ad0bef4c14aa54ef15f3d8e989f03bf1
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.349708088Z" level=info msg="Created container f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec: kube-system/kube-apiserver-pause-743639/kube-apiserver" id=df59cb0e-9054-411d-91fe-d418ff9e56eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.353230654Z" level=info msg="Starting container: f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec" id=f6ac5b2c-5be7-4181-b601-47645c0f8187 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.355406817Z" level=info msg="Started container" PID=2357 containerID=f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec description=kube-system/kube-apiserver-pause-743639/kube-apiserver id=f6ac5b2c-5be7-4181-b601-47645c0f8187 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40163f2081742cd55d2ea3a53ae6841efbd943d899b170bb6703d485973bb119
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.355787654Z" level=info msg="Created container 16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8: kube-system/kube-scheduler-pause-743639/kube-scheduler" id=9f9e2f65-7020-4a4d-ba2e-29f1f84406ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.359166349Z" level=info msg="Starting container: 16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8" id=d0a74123-211f-4554-84a3-d9df8e90e489 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:49:33 pause-743639 crio[2070]: time="2025-11-19T22:49:33.362488912Z" level=info msg="Started container" PID=2356 containerID=16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8 description=kube-system/kube-scheduler-pause-743639/kube-scheduler id=d0a74123-211f-4554-84a3-d9df8e90e489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c52e799b5c75a34bb69320597a2de0215f10bd99fa1c3834af5d3cbc55bb230
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.556505805Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561011634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561047926Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.561077424Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565254684Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565290483Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.565316132Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569519468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569554792Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.569579777Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.573594475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:49:43 pause-743639 crio[2070]: time="2025-11-19T22:49:43.573628855Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f8d6acd5f2bf9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   40163f2081742       kube-apiserver-pause-743639            kube-system
	16516a6a1f73d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   5c52e799b5c75       kube-scheduler-pause-743639            kube-system
	ab284bf12b2e5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   9a66d74ba6fb8       coredns-66bc5c9577-snvrx               kube-system
	f7a24f16a1401       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   089669e1b30c9       kube-proxy-jgn2m                       kube-system
	11c8fd6c6f971       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   ba46f803358b0       kindnet-9dzb9                          kube-system
	41ef975cdc862       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   84e9cd17c2fef       etcd-pause-743639                      kube-system
	6a370518913d3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   37f021fe5295f       kube-controller-manager-pause-743639   kube-system
	a1d839c4dd761       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   9a66d74ba6fb8       coredns-66bc5c9577-snvrx               kube-system
	4f23bd39b1b7b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ba46f803358b0       kindnet-9dzb9                          kube-system
	071f23bf7702e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   089669e1b30c9       kube-proxy-jgn2m                       kube-system
	3971476aca248       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c52e799b5c75       kube-scheduler-pause-743639            kube-system
	b2d0c49923463       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   40163f2081742       kube-apiserver-pause-743639            kube-system
	68a763f87346d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   84e9cd17c2fef       etcd-pause-743639                      kube-system
	1b88acdd51935       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   37f021fe5295f       kube-controller-manager-pause-743639   kube-system
	
	
	==> coredns [a1d839c4dd761c3832961fa26ddbc6aeb0489bf5f7809ec819b48181e9a4fc16] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53986 - 57958 "HINFO IN 800128124084991565.8302540441610945696. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015927599s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ab284bf12b2e55a2b5c30ca3f8b39ff269ce8f2046ba5ba2e57d07abf53778fd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45051 - 60938 "HINFO IN 4120834022136133682.5908664279437778434. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009566619s
	
	
	==> describe nodes <==
	Name:               pause-743639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-743639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=pause-743639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:48:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-743639
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:49:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:48:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:49:20 +0000   Wed, 19 Nov 2025 22:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-743639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                c25ea14b-509d-4342-a6f3-0f68227de082
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-snvrx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-743639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-9dzb9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-743639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-743639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-jgn2m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-743639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 76s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 83s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s   kubelet          Node pause-743639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s   kubelet          Node pause-743639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s   kubelet          Node pause-743639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s   node-controller  Node pause-743639 event: Registered Node pause-743639 in Controller
	  Normal   NodeReady                36s   kubelet          Node pause-743639 status is now: NodeReady
	  Normal   RegisteredNode           15s   node-controller  Node pause-743639 event: Registered Node pause-743639 in Controller
	
	
	==> dmesg <==
	[ +33.914297] overlayfs: idmapped layers are currently not supported
	[Nov19 22:22] overlayfs: idmapped layers are currently not supported
	[Nov19 22:23] overlayfs: idmapped layers are currently not supported
	[  +3.200978] overlayfs: idmapped layers are currently not supported
	[Nov19 22:24] overlayfs: idmapped layers are currently not supported
	[ +20.253339] overlayfs: idmapped layers are currently not supported
	[Nov19 22:26] overlayfs: idmapped layers are currently not supported
	[Nov19 22:31] overlayfs: idmapped layers are currently not supported
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41ef975cdc8623c00414594a8d3d94a614a8ef3fc161f854937339028de9efcb] <==
	{"level":"warn","ts":"2025-11-19T22:49:36.490583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.499054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.532092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.563295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.610675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.653640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.683441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.712891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.731481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.771437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.787832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.817211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.848323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.879385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.910110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.940223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.977977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:36.997497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.023134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.058102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.094932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.136059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.154139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.176425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:49:37.294544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	
	
	==> etcd [68a763f87346de7fced7d468257393247211fbec3248491d35175f813cedf14c] <==
	{"level":"warn","ts":"2025-11-19T22:48:29.493524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.513589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.532799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.554891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.572755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.591133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:48:29.665475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51644","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:49:25.048127Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T22:49:25.048194Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-743639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-19T22:49:25.048302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T22:49:25.050740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T22:49:25.198100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.198164Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-19T22:49:25.198231Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T22:49:25.198249Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198295Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198367Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T22:49:25.198401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198486Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T22:49:25.198507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T22:49:25.198525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.201429Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-19T22:49:25.201507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T22:49:25.201563Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T22:49:25.201594Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-743639","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 22:49:56 up  4:32,  0 user,  load average: 1.89, 2.47, 2.13
	Linux pause-743639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11c8fd6c6f9713d038f953487476bba6ffaa917653a267e0712298f3cab327c5] <==
	I1119 22:49:33.321129       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:49:33.338500       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:49:33.338647       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:49:33.338660       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:49:33.338674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:49:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:49:33.553370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:49:33.553537       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:49:33.553571       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:49:33.556861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:49:38.256573       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:49:38.256633       1 metrics.go:72] Registering metrics
	I1119 22:49:38.256700       1 controller.go:711] "Syncing nftables rules"
	I1119 22:49:43.556030       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:43.556161       1 main.go:301] handling current node
	I1119 22:49:53.554970       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:53.555001       1 main.go:301] handling current node
	
	
	==> kindnet [4f23bd39b1b7b5a9e2324bcf5dc7394750a2e3300c282cc346f9fe063693c579] <==
	I1119 22:48:39.727695       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:48:39.728041       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:48:39.728216       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:48:39.728259       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:48:39.728295       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:48:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:48:39.932297       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:48:39.932371       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:48:39.932410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:48:39.932784       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:49:09.932211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:49:09.932363       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:49:09.933108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:49:09.933240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:49:11.332603       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:49:11.332645       1 metrics.go:72] Registering metrics
	I1119 22:49:11.332727       1 controller.go:711] "Syncing nftables rules"
	I1119 22:49:19.934945       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:49:19.935006       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924] <==
	W1119 22:49:25.065856       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.065912       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.065957       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066411       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066460       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.066500       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.067614       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.068308       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069162       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069195       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069241       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069281       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069437       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069749       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069780       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069803       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069937       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.069998       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070042       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070225       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070287       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070292       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070319       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070321       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 22:49:25.070344       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f8d6acd5f2bf9705c03356dedb341e73cbe39204de10035e1a6ed9107a3d73ec] <==
	I1119 22:49:38.143626       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:49:38.143650       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:49:38.143657       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:49:38.143664       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:49:38.152269       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:49:38.152583       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:49:38.153944       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:49:38.154061       1 policy_source.go:240] refreshing policies
	I1119 22:49:38.168930       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:49:38.171092       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:49:38.173386       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:49:38.177043       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:49:38.179799       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:49:38.180599       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:49:38.180626       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:49:38.180719       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1119 22:49:38.201079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:49:38.203791       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:49:38.222917       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:49:38.899587       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:49:40.137943       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:49:41.518263       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:49:41.769179       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:49:41.819554       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:49:41.934106       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [1b88acdd5193545bf45d9dbee41f2a40a0ea8fc645842332f6efc20238afc997] <==
	I1119 22:48:37.658979       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-743639" podCIDRs=["10.244.0.0/24"]
	I1119 22:48:37.663833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:48:37.664155       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:48:37.664203       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:48:37.667856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:48:37.672916       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:48:37.674107       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:48:37.674799       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:48:37.675040       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:48:37.675657       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:48:37.675731       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:48:37.676315       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:48:37.676635       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:48:37.676693       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:48:37.676774       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:48:37.676841       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-743639"
	I1119 22:48:37.676877       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:48:37.676912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:48:37.676955       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:48:37.676649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:48:37.687513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:48:37.688513       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:48:37.697917       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:48:38.935362       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1119 22:49:22.687143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [6a370518913d3e600d8195c94e859b6072467940e2e7a356b0513e7dd8dfa80f] <==
	I1119 22:49:41.519336       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:49:41.519367       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:49:41.519407       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:49:41.519523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:49:41.520951       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:49:41.525850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:49:41.525970       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:49:41.528183       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:49:41.531293       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:49:41.534942       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:49:41.536095       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:49:41.560667       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:49:41.561941       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:49:41.562012       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:49:41.562034       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:49:41.562130       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-743639"
	I1119 22:49:41.562235       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:49:41.562305       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:49:41.562378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:49:41.562447       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:49:41.562495       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:49:41.562531       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:49:41.563754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:49:41.563811       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:49:41.563841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	
	
	==> kube-proxy [071f23bf7702e74d792529400c719dce663750e2ca40b7dfd1c27b2a0ce3c622] <==
	I1119 22:48:39.630054       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:48:39.728472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:48:39.833680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:48:39.833808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:48:39.833906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:48:39.852011       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:48:39.852071       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:48:39.856306       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:48:39.856657       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:48:39.856691       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:48:39.858047       1 config.go:200] "Starting service config controller"
	I1119 22:48:39.858067       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:48:39.858090       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:48:39.858094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:48:39.858107       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:48:39.858111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:48:39.858730       1 config.go:309] "Starting node config controller"
	I1119 22:48:39.858749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:48:39.858757       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:48:39.959080       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:48:39.961171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:48:39.958955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f7a24f16a1401e2f1b8a994072d15763b365ddede6abda90483a797110f9ee27] <==
	I1119 22:49:35.354257       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:49:37.363852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:49:38.323389       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:49:38.323942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:49:38.324082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:49:38.361703       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:49:38.361801       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:49:38.368159       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:49:38.368499       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:49:38.368521       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:49:38.370791       1 config.go:200] "Starting service config controller"
	I1119 22:49:38.370951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:49:38.373452       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:49:38.374801       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:49:38.374123       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:49:38.374947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:49:38.374518       1 config.go:309] "Starting node config controller"
	I1119 22:49:38.375034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:49:38.375062       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:49:38.471466       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:49:38.475756       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:49:38.475767       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16516a6a1f73d8ad5d309254573771a103af226e3a9cfd6220ff445d48221df8] <==
	I1119 22:49:37.050785       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:49:37.922778       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:49:37.922806       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:49:37.922817       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:49:37.922824       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:49:38.128703       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:49:38.128822       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:49:38.139018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:38.139124       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:38.139837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:49:38.139910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:49:38.239732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb] <==
	E1119 22:48:30.795949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:48:30.795962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:48:30.796000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:48:30.796042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:48:30.796154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:48:30.796230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:48:30.796287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:48:30.796361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:48:30.796394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:48:30.796434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:48:31.627298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:48:31.649600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:48:31.721067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:48:31.776955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:48:31.808617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:48:31.839300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:48:31.889331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:48:31.935256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1119 22:48:34.673526       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:25.048875       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 22:49:25.048971       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 22:49:25.048982       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 22:49:25.048999       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:49:25.049158       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 22:49:25.049173       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 19 22:49:33 pause-743639 kubelet[1314]: I1119 22:49:33.118551    1314 scope.go:117] "RemoveContainer" containerID="3971476aca2489fbea401b4c2421e6053fba1b5a4ea6bd945b7a4ab1c8e577eb"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120214    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120506    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d47521cb7b698fc949ff87e3718c9f3c" pod="kube-system/etcd-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120723    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b544df01855519c2eb101ce142a4d90b" pod="kube-system/kube-apiserver-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.120905    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6db6c11330d02f32f392522f8c4e329a" pod="kube-system/kube-controller-manager-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121072    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgn2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d654ae0c-812e-4423-82c1-860a834c4e1a" pod="kube-system/kube-proxy-jgn2m"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121279    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9dzb9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9eefb432-a68a-4f03-8e51-b3137d193739" pod="kube-system/kindnet-9dzb9"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.121438    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-snvrx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: I1119 22:49:33.160743    1314 scope.go:117] "RemoveContainer" containerID="b2d0c49923463dd09930d58bc0ab7bc8eb82e5b6f7780ef1163e7c738ede7924"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.161324    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b544df01855519c2eb101ce142a4d90b" pod="kube-system/kube-apiserver-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163262    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6db6c11330d02f32f392522f8c4e329a" pod="kube-system/kube-controller-manager-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163456    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgn2m\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d654ae0c-812e-4423-82c1-860a834c4e1a" pod="kube-system/kube-proxy-jgn2m"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163620    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9dzb9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9eefb432-a68a-4f03-8e51-b3137d193739" pod="kube-system/kindnet-9dzb9"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163780    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-snvrx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.163936    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:33 pause-743639 kubelet[1314]: E1119 22:49:33.164093    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-743639\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d47521cb7b698fc949ff87e3718c9f3c" pod="kube-system/etcd-pause-743639"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.073702    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-snvrx\" is forbidden: User \"system:node:pause-743639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" podUID="ac6ee0de-2507-4a85-bba9-d3bcc9eec6fb" pod="kube-system/coredns-66bc5c9577-snvrx"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.074708    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.074949    1314 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.079052    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-743639\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 19 22:49:38 pause-743639 kubelet[1314]: E1119 22:49:38.102178    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-743639\" is forbidden: User \"system:node:pause-743639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-743639' and this object" podUID="d013a4bd471877f784cd773ff63a572d" pod="kube-system/kube-scheduler-pause-743639"
	Nov 19 22:49:44 pause-743639 kubelet[1314]: W1119 22:49:44.057065    1314 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 19 22:49:51 pause-743639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:49:51 pause-743639 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:49:51 pause-743639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-743639 -n pause-743639
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-743639 -n pause-743639: exit status 2 (402.673695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-743639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.60s)
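For anyone reproducing this locally, a minimal sketch of re-running the same pause and status checks the harness performs above, assuming the profile name and binary path shown in this report (the crictl call is only an extra, hand-run way to see container state from inside the node and is not part of the test):

	# hypothetical manual re-run of the pause check for the pause-743639 profile
	out/minikube-linux-arm64 pause -p pause-743639
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-743639 -n pause-743639
	# assumes crictl is available in the node image, as it normally is in the kicbase image
	out/minikube-linux-arm64 ssh -p pause-743639 -- sudo crictl ps -a
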

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.197056ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:56:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-191961 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-191961 describe deploy/metrics-server -n kube-system: exit status 1 (91.571438ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-191961 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
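The exit status 11 above traces back to minikube's paused-state probe, `sudo runc list -f json`, failing inside the node with `open /run/runc: no such file or directory`. A minimal sketch of running that probe by hand, assuming the profile name from this report and that runc and crictl are present in the node as usual:

	# hypothetical manual run of the probe that the MK_ADDON_ENABLE_PAUSED error reports as failing
	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo runc list -f json
	# the directory runc complains about, plus an alternative view of container state via CRI-O's CLI
	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo crictl ps
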
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-191961
helpers_test.go:243: (dbg) docker inspect old-k8s-version-191961:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	        "Created": "2025-11-19T22:55:13.430692279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1051037,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:55:13.508456762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hosts",
	        "LogPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee-json.log",
	        "Name": "/old-k8s-version-191961",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-191961:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-191961",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	                "LowerDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-191961",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-191961/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-191961",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5546ed644b86f18cd636e9e8b45836eb0d8e0d58bdf5276a5556691ddeb9071a",
	            "SandboxKey": "/var/run/docker/netns/5546ed644b86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-191961": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:a1:73:24:6f:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47f03f83c3fe719b80c42f4da32b57adc2e9e8ee352f6eea7c164878ce0bc301",
	                    "EndpointID": "e7349ca2cc1ecb788eaf789554040da162bb1da4f22c24d1e39366a1a7ee8609",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-191961",
	                        "e6ae989c9f99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
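The full docker inspect dump above can be narrowed with a Go-template format string; a small sketch, using the container name from this report, of pulling out just the run state and published ports that the post-mortem checks care about:

	# hypothetical narrowed inspect; --format takes a Go template, and the json helper is built in
	docker inspect old-k8s-version-191961 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker inspect old-k8s-version-191961 --format '{{json .NetworkSettings.Ports}}'
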
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25: (1.283520469s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-334366 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo containerd config dump                                                                                                                                                                                                  │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo crio config                                                                                                                                                                                                             │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ delete  │ -p cilium-334366                                                                                                                                                                                                                              │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:50 UTC │
	│ start   │ -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:54 UTC │
	│ delete  │ -p force-systemd-env-860026                                                                                                                                                                                                                   │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:55:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:55:29.867770 1053743 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:55:29.868015 1053743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:55:29.868044 1053743 out.go:374] Setting ErrFile to fd 2...
	I1119 22:55:29.868073 1053743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:55:29.868393 1053743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:55:29.868889 1053743 out.go:368] Setting JSON to false
	I1119 22:55:29.869997 1053743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16659,"bootTime":1763576271,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:55:29.870109 1053743 start.go:143] virtualization:  
	I1119 22:55:29.882971 1053743 out.go:179] * [no-preload-018508] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:55:29.886464 1053743 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:55:29.886535 1053743 notify.go:221] Checking for updates...
	I1119 22:55:29.893014 1053743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:55:29.896288 1053743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:55:29.899427 1053743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:55:29.902520 1053743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:55:29.905424 1053743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:55:29.908792 1053743 config.go:182] Loaded profile config "old-k8s-version-191961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:55:29.908891 1053743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:55:29.983321 1053743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:55:29.983474 1053743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:55:30.135909 1053743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:55:30.116634947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:55:30.136021 1053743 docker.go:319] overlay module found
	I1119 22:55:30.139407 1053743 out.go:179] * Using the docker driver based on user configuration
	I1119 22:55:30.142319 1053743 start.go:309] selected driver: docker
	I1119 22:55:30.142341 1053743 start.go:930] validating driver "docker" against <nil>
	I1119 22:55:30.142356 1053743 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:55:30.143165 1053743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:55:30.219018 1053743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:55:30.209472409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:55:30.219186 1053743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:55:30.219411 1053743 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:55:30.222350 1053743 out.go:179] * Using Docker driver with root privileges
	I1119 22:55:30.225423 1053743 cni.go:84] Creating CNI manager for ""
	I1119 22:55:30.225487 1053743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:55:30.225500 1053743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:55:30.225579 1053743 start.go:353] cluster config:
	{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:55:30.230557 1053743 out.go:179] * Starting "no-preload-018508" primary control-plane node in "no-preload-018508" cluster
	I1119 22:55:30.233336 1053743 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:55:30.236397 1053743 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:55:30.239366 1053743 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:55:30.239502 1053743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:55:30.239545 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json: {Name:mk441607f97e825d84acf9997131ade689daa385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:30.239744 1053743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:55:30.239939 1053743 cache.go:107] acquiring lock: {Name:mk180e474f04af563cbfcf1e6f1ac0d968064e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240001 1053743 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:55:30.240010 1053743 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.904µs
	I1119 22:55:30.240018 1053743 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:55:30.240029 1053743 cache.go:107] acquiring lock: {Name:mk2b339ab9bb06155cf46e99d17bcad78cd42ce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240100 1053743 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:30.240285 1053743 cache.go:107] acquiring lock: {Name:mkeb8164ef4491f0dac349eed28d827e1ab20310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240352 1053743 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:30.240437 1053743 cache.go:107] acquiring lock: {Name:mk60a33fac3c62a01332ec72da7be7d237eebaf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240499 1053743 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:30.240570 1053743 cache.go:107] acquiring lock: {Name:mkf092cd9edaf9fd2c691350815b05d694be6ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240625 1053743 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:30.240719 1053743 cache.go:107] acquiring lock: {Name:mk202931c6624db071b2edd07b2fea5bfea95f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240781 1053743 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:55:30.240874 1053743 cache.go:107] acquiring lock: {Name:mk29595f21458d904bc2d24173d38f20affcf328 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.240934 1053743 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:30.241011 1053743 cache.go:107] acquiring lock: {Name:mk5ba73f1f86578edab04675b64317e89203f7a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.241085 1053743 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:30.244832 1053743 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:30.245460 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:30.245685 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:30.245886 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:30.246093 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:30.246300 1053743 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:55:30.246505 1053743 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:30.271653 1053743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:55:30.271673 1053743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:55:30.271685 1053743 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:55:30.271709 1053743 start.go:360] acquireMachinesLock for no-preload-018508: {Name:mk5707a3ba7045dab1a444980a59ede7567f2c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:55:30.271801 1053743 start.go:364] duration metric: took 76.587µs to acquireMachinesLock for "no-preload-018508"
	I1119 22:55:30.271826 1053743 start.go:93] Provisioning new machine with config: &{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:55:30.271890 1053743 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:55:27.807554 1050506 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:55:28.353876 1050506 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:55:28.716762 1050506 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:55:29.624992 1050506 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:55:29.943263 1050506 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:55:29.943414 1050506 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191961] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:55:30.227755 1050506 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:55:30.228171 1050506 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191961] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:55:32.290111 1050506 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:55:30.295182 1053743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:55:30.295482 1053743 start.go:159] libmachine.API.Create for "no-preload-018508" (driver="docker")
	I1119 22:55:30.296353 1053743 client.go:173] LocalClient.Create starting
	I1119 22:55:30.296476 1053743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 22:55:30.296531 1053743 main.go:143] libmachine: Decoding PEM data...
	I1119 22:55:30.296581 1053743 main.go:143] libmachine: Parsing certificate...
	I1119 22:55:30.296675 1053743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 22:55:30.296729 1053743 main.go:143] libmachine: Decoding PEM data...
	I1119 22:55:30.296764 1053743 main.go:143] libmachine: Parsing certificate...
	I1119 22:55:30.297442 1053743 cli_runner.go:164] Run: docker network inspect no-preload-018508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:55:30.318054 1053743 cli_runner.go:211] docker network inspect no-preload-018508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:55:30.318134 1053743 network_create.go:284] running [docker network inspect no-preload-018508] to gather additional debugging logs...
	I1119 22:55:30.318157 1053743 cli_runner.go:164] Run: docker network inspect no-preload-018508
	W1119 22:55:30.344397 1053743 cli_runner.go:211] docker network inspect no-preload-018508 returned with exit code 1
	I1119 22:55:30.344424 1053743 network_create.go:287] error running [docker network inspect no-preload-018508]: docker network inspect no-preload-018508: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-018508 not found
	I1119 22:55:30.344438 1053743 network_create.go:289] output of [docker network inspect no-preload-018508]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-018508 not found
	
	** /stderr **
	I1119 22:55:30.344540 1053743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:55:30.373333 1053743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 22:55:30.373676 1053743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 22:55:30.373987 1053743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 22:55:30.374233 1053743 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-47f03f83c3fe IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:39:05:43:10:b3} reservation:<nil>}
	I1119 22:55:30.374655 1053743 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c53370}
	I1119 22:55:30.374680 1053743 network_create.go:124] attempt to create docker network no-preload-018508 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:55:30.374743 1053743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-018508 no-preload-018508
	I1119 22:55:30.485102 1053743 network_create.go:108] docker network no-preload-018508 192.168.85.0/24 created
	I1119 22:55:30.485132 1053743 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-018508" container
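[Editor's aside] The lines above show minikube skipping the 192.168.49/58/67/76 subnets already claimed by other profiles and settling on 192.168.85.0/24 before creating the `no-preload-018508` bridge network. A minimal, self-contained sketch of that selection step is below; the candidate list stepping by 9 and the function names are assumptions for illustration, not minikube's actual code.

```go
// Hypothetical sketch of the subnet-selection step seen in the log: walk a
// list of candidate private /24s and return the first one that does not
// overlap an existing docker bridge network.
package main

import (
	"fmt"
	"net"
)

func pickFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	// Candidates mirror the 192.168.x.0/24 progression visible in the log
	// (49, 58, 67, 76, 85, ...); the step of 9 is an assumption.
	for third := 49; third < 256; third += 9 {
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		free := true
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				free = false // overlaps a network that is already in use
				break
			}
		}
		if free {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	var taken []*net.IPNet
	for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		_, n, _ := net.ParseCIDR(c)
		taken = append(taken, n)
	}
	subnet, _ := pickFreeSubnet(taken)
	fmt.Println("using free private subnet:", subnet) // 192.168.85.0/24
}
```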
	I1119 22:55:30.485208 1053743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:55:30.503499 1053743 cli_runner.go:164] Run: docker volume create no-preload-018508 --label name.minikube.sigs.k8s.io=no-preload-018508 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:55:30.528026 1053743 oci.go:103] Successfully created a docker volume no-preload-018508
	I1119 22:55:30.528110 1053743 cli_runner.go:164] Run: docker run --rm --name no-preload-018508-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-018508 --entrypoint /usr/bin/test -v no-preload-018508:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:55:30.615715 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:55:30.636115 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:55:30.636849 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:55:30.666104 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1119 22:55:30.683401 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:55:30.698926 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:55:30.704615 1053743 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:55:30.719147 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:55:30.719223 1053743 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 478.504521ms
	I1119 22:55:30.719250 1053743 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:55:31.271126 1053743 oci.go:107] Successfully prepared a docker volume no-preload-018508
	I1119 22:55:31.271169 1053743 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1119 22:55:31.271314 1053743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:55:31.271435 1053743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:55:31.412105 1053743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-018508 --name no-preload-018508 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-018508 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-018508 --network no-preload-018508 --ip 192.168.85.2 --volume no-preload-018508:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:55:31.428945 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:55:31.429127 1053743 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.188554596s
	I1119 22:55:31.429144 1053743 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:55:31.599723 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:55:31.599806 1053743 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.359368654s
	I1119 22:55:31.599855 1053743 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:55:31.724612 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:55:31.724644 1053743 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.484360511s
	I1119 22:55:31.724656 1053743 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:55:31.757928 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:55:31.758058 1053743 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.518000576s
	I1119 22:55:31.758091 1053743 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:55:31.921316 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Running}}
	I1119 22:55:31.975093 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:55:31.980934 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:55:31.981015 1053743 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.740002107s
	I1119 22:55:31.981046 1053743 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:55:32.008753 1053743 cli_runner.go:164] Run: docker exec no-preload-018508 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:55:32.097896 1053743 oci.go:144] the created container "no-preload-018508" has a running status.
	I1119 22:55:32.097926 1053743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa...
	I1119 22:55:32.313907 1053743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:55:32.368850 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:55:32.395449 1053743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:55:32.395477 1053743 kic_runner.go:114] Args: [docker exec --privileged no-preload-018508 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:55:32.467768 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:55:32.501368 1053743 machine.go:94] provisionDockerMachine start ...
	I1119 22:55:32.501468 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:32.533834 1053743 main.go:143] libmachine: Using SSH client type: native
	I1119 22:55:32.534189 1053743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33846 <nil> <nil>}
	I1119 22:55:32.534208 1053743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:55:32.535126 1053743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
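[Editor's aside] The "Error dialing TCP: ssh: handshake failed: EOF" entry above is the first SSH attempt against the container's forwarded port (127.0.0.1:33846) while sshd is still coming up; the provisioner retries and succeeds a few seconds later. The sketch below only checks TCP reachability with a retry loop, a simplification of the real handshake-level retry; the address, attempt count, and delay are assumptions.

```go
// Hedged sketch: poll the forwarded SSH port until it accepts connections,
// as a stand-in for the retry behaviour implied by the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is reachable; the SSH handshake can be retried
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh on %s not reachable after %d attempts", addr, attempts)
}

func main() {
	if err := waitForSSH("127.0.0.1:33846", 10, time.Second); err != nil {
		fmt.Println(err)
	}
}
```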
	I1119 22:55:33.724310 1053743 cache.go:157] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:55:33.724388 1053743 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.483512953s
	I1119 22:55:33.724415 1053743 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:55:33.724460 1053743 cache.go:87] Successfully saved all images to host disk.
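[Editor's aside] The cache lines above (pause, kube-proxy, kube-scheduler, ..., etcd) all follow the same pattern: if the image tarball is already under `.minikube/cache/images/arm64/...` it is reported as existing and the save is skipped, otherwise it is opened and written. A minimal sketch of that check is below, assuming the path layout visible in the log; `saveImage` is a hypothetical helper, not a minikube function.

```go
// Minimal sketch of the on-disk image cache check implied by the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps "registry.k8s.io/pause:3.10.1" to
// <root>/registry.k8s.io/pause_3.10.1, mirroring the names in the log.
func cachePath(root, image string) string {
	return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(root, image string, saveImage func(dst string) error) error {
	dst := cachePath(root, image)
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q exists at %s, skipping save\n", image, dst)
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	return saveImage(dst) // pull the image and write it as a tar file (elided)
}

func main() {
	root := os.TempDir()
	_ = ensureCached(root, "registry.k8s.io/pause:3.10.1", func(dst string) error {
		fmt.Println("would save tar to", dst)
		return nil
	})
}
```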
	I1119 22:55:32.976575 1050506 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:55:33.851579 1050506 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:55:33.851655 1050506 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:55:34.259789 1050506 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:55:34.667267 1050506 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:55:35.207051 1050506 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:55:36.074706 1050506 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:55:36.074807 1050506 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:55:36.079018 1050506 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:55:36.082987 1050506 out.go:252]   - Booting up control plane ...
	I1119 22:55:36.083102 1050506 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:55:36.083191 1050506 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:55:36.083263 1050506 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:55:36.106593 1050506 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:55:36.107665 1050506 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:55:36.107724 1050506 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:55:36.283426 1050506 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 22:55:35.694833 1053743 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
	
	I1119 22:55:35.694861 1053743 ubuntu.go:182] provisioning hostname "no-preload-018508"
	I1119 22:55:35.694953 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:35.716723 1053743 main.go:143] libmachine: Using SSH client type: native
	I1119 22:55:35.717083 1053743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33846 <nil> <nil>}
	I1119 22:55:35.717103 1053743 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-018508 && echo "no-preload-018508" | sudo tee /etc/hostname
	I1119 22:55:35.886797 1053743 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
	
	I1119 22:55:35.886938 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:35.908387 1053743 main.go:143] libmachine: Using SSH client type: native
	I1119 22:55:35.908712 1053743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33846 <nil> <nil>}
	I1119 22:55:35.908734 1053743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018508/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:55:36.059733 1053743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:55:36.059757 1053743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:55:36.059782 1053743 ubuntu.go:190] setting up certificates
	I1119 22:55:36.059802 1053743 provision.go:84] configureAuth start
	I1119 22:55:36.059865 1053743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:55:36.088754 1053743 provision.go:143] copyHostCerts
	I1119 22:55:36.088822 1053743 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:55:36.088830 1053743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:55:36.088906 1053743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:55:36.089001 1053743 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:55:36.089007 1053743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:55:36.089033 1053743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:55:36.089083 1053743 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:55:36.089088 1053743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:55:36.089110 1053743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:55:36.089155 1053743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.no-preload-018508 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-018508]
	I1119 22:55:36.474096 1053743 provision.go:177] copyRemoteCerts
	I1119 22:55:36.474167 1053743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:55:36.474211 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:36.504645 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:36.607683 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:55:36.627749 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:55:36.647928 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:55:36.667874 1053743 provision.go:87] duration metric: took 608.049893ms to configureAuth
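[Editor's aside] configureAuth above generates a machine server certificate signed by the profile CA, with the SANs listed in the log (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-018508), and copies it to /etc/docker on the node. The standalone sketch below shows how such a certificate can be issued with Go's crypto/x509; it uses a throwaway CA and short lifetimes for illustration and is not minikube's provisioning code.

```go
// Illustrative sketch: issue a server certificate whose SANs match the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; in the log this role is played by certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-018508"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-018508"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```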
	I1119 22:55:36.667951 1053743 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:55:36.668204 1053743 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:55:36.668362 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:36.688799 1053743 main.go:143] libmachine: Using SSH client type: native
	I1119 22:55:36.689215 1053743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33846 <nil> <nil>}
	I1119 22:55:36.689239 1053743 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:55:37.043702 1053743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:55:37.043731 1053743 machine.go:97] duration metric: took 4.542341252s to provisionDockerMachine
	I1119 22:55:37.043740 1053743 client.go:176] duration metric: took 6.747344053s to LocalClient.Create
	I1119 22:55:37.043763 1053743 start.go:167] duration metric: took 6.748283513s to libmachine.API.Create "no-preload-018508"
	I1119 22:55:37.043774 1053743 start.go:293] postStartSetup for "no-preload-018508" (driver="docker")
	I1119 22:55:37.043800 1053743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:55:37.043876 1053743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:55:37.043966 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:37.069996 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:37.176164 1053743 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:55:37.180068 1053743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:55:37.180102 1053743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:55:37.180115 1053743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:55:37.180173 1053743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:55:37.180255 1053743 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:55:37.180362 1053743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:55:37.188502 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:55:37.209055 1053743 start.go:296] duration metric: took 165.265253ms for postStartSetup
	I1119 22:55:37.209488 1053743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:55:37.229530 1053743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:55:37.229812 1053743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:55:37.229864 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:37.249607 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:37.348287 1053743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:55:37.353446 1053743 start.go:128] duration metric: took 7.081541487s to createHost
	I1119 22:55:37.353472 1053743 start.go:83] releasing machines lock for "no-preload-018508", held for 7.081662365s
	I1119 22:55:37.353541 1053743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:55:37.376751 1053743 ssh_runner.go:195] Run: cat /version.json
	I1119 22:55:37.376804 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:37.377030 1053743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:55:37.377090 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:37.409774 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:37.419451 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:37.617172 1053743 ssh_runner.go:195] Run: systemctl --version
	I1119 22:55:37.623848 1053743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:55:37.661594 1053743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:55:37.665914 1053743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:55:37.665990 1053743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:55:37.694890 1053743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:55:37.694915 1053743 start.go:496] detecting cgroup driver to use...
	I1119 22:55:37.694947 1053743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:55:37.694995 1053743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:55:37.713049 1053743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:55:37.726137 1053743 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:55:37.726201 1053743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:55:37.744485 1053743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:55:37.772599 1053743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:55:37.963379 1053743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:55:38.181933 1053743 docker.go:234] disabling docker service ...
	I1119 22:55:38.182013 1053743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:55:38.225294 1053743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:55:38.246273 1053743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:55:38.473862 1053743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:55:38.696729 1053743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:55:38.710317 1053743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:55:38.746724 1053743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:55:38.746804 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.760586 1053743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:55:38.760657 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.777597 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.793826 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.809284 1053743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:55:38.821177 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.834435 1053743 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.860001 1053743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:55:38.869614 1053743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:55:38.879120 1053743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:55:38.887889 1053743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:55:39.117876 1053743 ssh_runner.go:195] Run: sudo systemctl restart crio
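[Editor's aside] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pin pause_image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", enable unprivileged low ports) and then restarts cri-o. The sketch below reproduces the main substitutions on an in-memory string with Go regexps; the sample input and the placement of conmon_cgroup at the end are simplifications of what the sed commands do in place.

```go
// Standalone sketch of the drop-in edits performed by the sed commands above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// cri-o expects conmon_cgroup = "pod" when the manager is cgroupfs.
	conf = conmon.ReplaceAllString(conf, "")
	conf += "conmon_cgroup = \"pod\"\n"

	fmt.Print(conf)
}
```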
	I1119 22:55:39.357880 1053743 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:55:39.357953 1053743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:55:39.363453 1053743 start.go:564] Will wait 60s for crictl version
	I1119 22:55:39.363520 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:39.369862 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:55:39.462017 1053743 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:55:39.462113 1053743 ssh_runner.go:195] Run: crio --version
	I1119 22:55:39.493251 1053743 ssh_runner.go:195] Run: crio --version
	I1119 22:55:39.535038 1053743 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:55:39.537828 1053743 cli_runner.go:164] Run: docker network inspect no-preload-018508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:55:39.563451 1053743 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:55:39.567325 1053743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:55:39.591372 1053743 kubeadm.go:884] updating cluster {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:55:39.591493 1053743 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:55:39.591542 1053743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:55:39.642733 1053743 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:55:39.642755 1053743 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 22:55:39.642792 1053743 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:39.643026 1053743 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:39.643134 1053743 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:39.643223 1053743 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:39.643320 1053743 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:39.643416 1053743 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:55:39.643508 1053743 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:39.643622 1053743 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:39.644589 1053743 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:39.644821 1053743 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:39.645143 1053743 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:55:39.645398 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:39.645561 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:39.645713 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:39.645857 1053743 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:39.646474 1053743 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:39.896718 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:39.903217 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:39.916716 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1119 22:55:39.918230 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:39.927969 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:39.944831 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:39.954069 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:40.044031 1053743 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1119 22:55:40.044124 1053743 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:40.044212 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.153710 1053743 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1119 22:55:40.153799 1053743 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:40.153879 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.264902 1053743 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1119 22:55:40.264995 1053743 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 22:55:40.265080 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.265224 1053743 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1119 22:55:40.265272 1053743 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:40.265318 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.265440 1053743 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1119 22:55:40.265478 1053743 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:40.265532 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.265638 1053743 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1119 22:55:40.265685 1053743 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:40.265724 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:40.277558 1053743 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1119 22:55:40.277647 1053743 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:40.277730 1053743 ssh_runner.go:195] Run: which crictl
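[Editor's aside] Each "needs transfer" line above comes from the same decision: ask the runtime (via `sudo podman image inspect --format {{.Id}}`) for the image's ID and compare it against the digest expected for the cached tarball; a missing or mismatched image is removed with crictl and then loaded from the cache. A simplified sketch of that decision follows; `inspectID` is a hypothetical wrapper, not a real minikube or podman API.

```go
// Simplified sketch of the "needs transfer" check seen in cache_images.go:118.
package main

import "fmt"

func needsTransfer(image, wantID string, inspectID func(image string) (string, error)) bool {
	gotID, err := inspectID(image)
	if err != nil || gotID != wantID {
		// Missing or mismatched in the container runtime: load it from the cache.
		return true
	}
	return false
}

func main() {
	// Stand-in for `podman image inspect` failing because the image is absent.
	fake := func(image string) (string, error) { return "", fmt.Errorf("no such image") }
	if needsTransfer("registry.k8s.io/etcd:3.6.4-0",
		"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e", fake) {
		fmt.Println(`"registry.k8s.io/etcd:3.6.4-0" needs transfer`)
	}
}
```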
	I1119 22:55:40.277850 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:40.277967 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:40.303041 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:40.303194 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:55:40.303302 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:40.303390 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:40.396601 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:40.397287 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:40.559525 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:40.559617 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:40.559653 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:55:40.559704 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:40.559818 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:40.559842 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:55:40.560059 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:55:40.787067 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:55:40.787239 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:55:40.787359 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:55:40.787453 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:40.787565 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:55:40.787669 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:55:40.787779 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:55:40.787907 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:55:40.788002 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:55:40.957670 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 22:55:40.957710 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1119 22:55:40.957764 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 22:55:40.957781 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1119 22:55:40.957834 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:55:40.957919 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:55:40.957986 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:55:40.958033 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1119 22:55:40.958083 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 22:55:40.958130 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:55:40.958175 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:55:40.958224 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:55:40.958270 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1119 22:55:40.975650 1053743 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1119 22:55:40.975835 1053743 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:41.067691 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 22:55:41.067734 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1119 22:55:41.067791 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 22:55:41.067805 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1119 22:55:41.067848 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 22:55:41.067863 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1119 22:55:41.067901 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 22:55:41.067915 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1119 22:55:41.100423 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:55:41.100552 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	W1119 22:55:41.122450 1053743 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1119 22:55:41.122510 1053743 retry.go:31] will retry after 303.970696ms: ssh: rejected: connect failed (open failed)
	W1119 22:55:41.123127 1053743 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1119 22:55:41.123153 1053743 retry.go:31] will retry after 361.249264ms: ssh: rejected: connect failed (open failed)
	I1119 22:55:41.285798 1053743 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 22:55:41.285877 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1119 22:55:41.285958 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:41.320991 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:41.323037 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 22:55:41.323071 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1119 22:55:41.323127 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:41.323468 1053743 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1119 22:55:41.323492 1053743 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:41.323537 1053743 ssh_runner.go:195] Run: which crictl
	I1119 22:55:41.323600 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:55:41.421209 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:41.426969 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:55:42.312711 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:42.312796 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.10.1: (1.026898035s)
	I1119 22:55:42.313091 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1119 22:55:42.313132 1053743 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:55:42.313209 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:55:48.286720 1050506 kubeadm.go:319] [apiclient] All control plane components are healthy after 12.005502 seconds
	I1119 22:55:48.286850 1050506 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:55:48.314107 1050506 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:55:48.865795 1050506 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:55:48.866034 1050506 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-191961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:55:49.380652 1050506 kubeadm.go:319] [bootstrap-token] Using token: gq7hdx.tg96h4omazvnwt90
	I1119 22:55:45.330838 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.017583744s)
	I1119 22:55:45.330933 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 22:55:45.330967 1053743 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:55:45.331047 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:55:45.331092 1053743 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.018102131s)
	I1119 22:55:45.331206 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:47.378468 1053743 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.047228213s)
	I1119 22:55:47.378544 1053743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:55:47.378617 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.047499222s)
	I1119 22:55:47.378628 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 22:55:47.378645 1053743 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:55:47.378668 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:55:48.922275 1053743 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.543710551s)
	I1119 22:55:48.922325 1053743 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 22:55:48.922448 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:55:48.922509 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.543831281s)
	I1119 22:55:48.922524 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 22:55:48.922542 1053743 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:55:48.922574 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:55:49.383529 1050506 out.go:252]   - Configuring RBAC rules ...
	I1119 22:55:49.383655 1050506 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:55:49.390041 1050506 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:55:49.412243 1050506 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:55:49.416819 1050506 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:55:49.421361 1050506 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:55:49.428221 1050506 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:55:49.445176 1050506 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:55:49.880582 1050506 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:55:50.101816 1050506 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:55:50.101836 1050506 kubeadm.go:319] 
	I1119 22:55:50.101899 1050506 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:55:50.101904 1050506 kubeadm.go:319] 
	I1119 22:55:50.101985 1050506 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:55:50.101995 1050506 kubeadm.go:319] 
	I1119 22:55:50.102021 1050506 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:55:50.102083 1050506 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:55:50.102136 1050506 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:55:50.102140 1050506 kubeadm.go:319] 
	I1119 22:55:50.102196 1050506 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:55:50.102200 1050506 kubeadm.go:319] 
	I1119 22:55:50.102250 1050506 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:55:50.102254 1050506 kubeadm.go:319] 
	I1119 22:55:50.102308 1050506 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:55:50.102387 1050506 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:55:50.102458 1050506 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:55:50.102463 1050506 kubeadm.go:319] 
	I1119 22:55:50.102551 1050506 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:55:50.102631 1050506 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:55:50.102636 1050506 kubeadm.go:319] 
	I1119 22:55:50.102723 1050506 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gq7hdx.tg96h4omazvnwt90 \
	I1119 22:55:50.102831 1050506 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:55:50.102852 1050506 kubeadm.go:319] 	--control-plane 
	I1119 22:55:50.102859 1050506 kubeadm.go:319] 
	I1119 22:55:50.102971 1050506 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:55:50.102977 1050506 kubeadm.go:319] 
	I1119 22:55:50.103062 1050506 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gq7hdx.tg96h4omazvnwt90 \
	I1119 22:55:50.103169 1050506 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:55:50.131689 1050506 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:55:50.131842 1050506 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:55:50.131874 1050506 cni.go:84] Creating CNI manager for ""
	I1119 22:55:50.131887 1050506 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:55:50.135181 1050506 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:55:50.138030 1050506 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:55:50.161442 1050506 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:55:50.161469 1050506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:55:50.228597 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:55:51.517419 1050506 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.288789053s)
	I1119 22:55:51.517456 1050506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:55:51.517585 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:51.517659 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-191961 minikube.k8s.io/updated_at=2025_11_19T22_55_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-191961 minikube.k8s.io/primary=true
	I1119 22:55:51.860436 1050506 ops.go:34] apiserver oom_adj: -16
	I1119 22:55:51.860559 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:53.331929 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.409328999s)
	I1119 22:55:53.332020 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:55:53.332050 1053743 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:55:53.331954 1053743 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.409479056s)
	I1119 22:55:53.332090 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 22:55:53.332113 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1119 22:55:53.332132 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:55:52.361608 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:52.861249 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:53.360932 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:53.861108 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:54.361073 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:54.861037 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:55.361133 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:55.861399 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:56.361057 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:56.860741 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:55.189745 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.857588223s)
	I1119 22:55:55.189778 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 22:55:55.189799 1053743 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:55:55.189848 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:55:56.467886 1053743 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.278003644s)
	I1119 22:55:56.467963 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 22:55:56.467997 1053743 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:55:56.468088 1053743 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:55:57.077785 1053743 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:55:57.077829 1053743 cache_images.go:125] Successfully loaded all cached images
	I1119 22:55:57.077836 1053743 cache_images.go:94] duration metric: took 17.435067193s to LoadCachedImages
	I1119 22:55:57.077848 1053743 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:55:57.077944 1053743 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-018508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:55:57.078055 1053743 ssh_runner.go:195] Run: crio config
	I1119 22:55:57.133539 1053743 cni.go:84] Creating CNI manager for ""
	I1119 22:55:57.133616 1053743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:55:57.133654 1053743 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:55:57.133710 1053743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018508 NodeName:no-preload-018508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:55:57.133872 1053743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018508"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:55:57.133990 1053743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:55:57.142058 1053743 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 22:55:57.142149 1053743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 22:55:57.150528 1053743 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1119 22:55:57.150647 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 22:55:57.151726 1053743 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1119 22:55:57.151728 1053743 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1119 22:55:57.155804 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 22:55:57.155844 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1119 22:55:58.056264 1053743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:55:58.071161 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 22:55:58.076163 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 22:55:58.076205 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1119 22:55:58.083157 1053743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 22:55:58.098317 1053743 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 22:55:58.098356 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1119 22:55:58.843803 1053743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:55:58.852983 1053743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 22:55:58.878789 1053743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:55:58.896054 1053743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:55:58.917609 1053743 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:55:58.922196 1053743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:55:58.936375 1053743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:55:59.082855 1053743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:55:59.101702 1053743 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508 for IP: 192.168.85.2
	I1119 22:55:59.101742 1053743 certs.go:195] generating shared ca certs ...
	I1119 22:55:59.101775 1053743 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.101977 1053743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:55:59.102041 1053743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:55:59.102062 1053743 certs.go:257] generating profile certs ...
	I1119 22:55:59.102150 1053743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.key
	I1119 22:55:59.102173 1053743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt with IP's: []
	I1119 22:55:59.317072 1053743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt ...
	I1119 22:55:59.317147 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: {Name:mkbf0773dd87f0c429017182cda4c79fd65e8fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.317388 1053743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.key ...
	I1119 22:55:59.317423 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.key: {Name:mkbbd58529a35869482e78807b86bcd36a6b0937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.317575 1053743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e
	I1119 22:55:59.317614 1053743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt.7c4af07e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:55:59.373136 1053743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt.7c4af07e ...
	I1119 22:55:59.373197 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt.7c4af07e: {Name:mk4f974c0725400886f3d74f1a1a813d5652289d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.373405 1053743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e ...
	I1119 22:55:59.373447 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e: {Name:mkd368267ac0f529ec5a85976dd4b4bbfbcc6b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.373589 1053743 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt.7c4af07e -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt
	I1119 22:55:59.373715 1053743 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key
	I1119 22:55:59.373819 1053743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key
	I1119 22:55:59.373858 1053743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt with IP's: []
	I1119 22:55:59.544820 1053743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt ...
	I1119 22:55:59.544890 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt: {Name:mk8f0e7c8df90387a65379052fddfae6863a035a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.545087 1053743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key ...
	I1119 22:55:59.545128 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key: {Name:mk2b7fdf61ec7cfb4d810649e9388dcd263fc259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:55:59.545366 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:55:59.545433 1053743 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:55:59.545462 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:55:59.545524 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:55:59.545587 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:55:59.545637 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:55:59.545723 1053743 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:55:59.546313 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:55:59.568057 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:55:59.590415 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:55:59.610262 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:55:59.631393 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:55:59.654925 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:55:59.674368 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:55:59.694115 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:55:59.713632 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:55:59.733772 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:55:59.754715 1053743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:55:59.773901 1053743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:55:59.791760 1053743 ssh_runner.go:195] Run: openssl version
	I1119 22:55:59.799436 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:55:59.809082 1053743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:55:59.815159 1053743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:55:59.815268 1053743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:55:59.875660 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:55:59.886185 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:55:59.894984 1053743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:55:59.903435 1053743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:55:59.903565 1053743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:55:59.956492 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 22:55:59.969232 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:55:59.985373 1053743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:55:59.989803 1053743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:55:59.989880 1053743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:56:00.034613 1053743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:56:00.062856 1053743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:56:00.068650 1053743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:56:00.068771 1053743 kubeadm.go:401] StartCluster: {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:56:00.068959 1053743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:56:00.069068 1053743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:56:00.179868 1053743 cri.go:89] found id: ""
	I1119 22:56:00.180114 1053743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:56:00.217674 1053743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:56:00.275887 1053743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:56:00.275967 1053743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:56:00.295065 1053743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:56:00.295163 1053743 kubeadm.go:158] found existing configuration files:
	
	I1119 22:56:00.295273 1053743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:56:00.322274 1053743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:56:00.322430 1053743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:56:00.346102 1053743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:56:00.397720 1053743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:56:00.397915 1053743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:56:00.425224 1053743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:56:00.443916 1053743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:56:00.443990 1053743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:56:00.470056 1053743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:56:00.504324 1053743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:56:00.504399 1053743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:56:00.557243 1053743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:56:00.639967 1053743 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:56:00.640213 1053743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:56:00.674910 1053743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:56:00.674985 1053743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:56:00.675023 1053743 kubeadm.go:319] OS: Linux
	I1119 22:56:00.675071 1053743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:56:00.675121 1053743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:56:00.675171 1053743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:56:00.675222 1053743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:56:00.675272 1053743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:56:00.675322 1053743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:56:00.675369 1053743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:56:00.675426 1053743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:56:00.675485 1053743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:56:00.752637 1053743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:56:00.752771 1053743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:56:00.752871 1053743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:56:00.768645 1053743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:55:57.361360 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:57.860690 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:58.361016 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:58.861251 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:59.361068 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:55:59.861483 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:00.361541 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:00.861187 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:01.361274 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:01.860808 1050506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:02.071006 1050506 kubeadm.go:1114] duration metric: took 10.553480302s to wait for elevateKubeSystemPrivileges
	I1119 22:56:02.071040 1050506 kubeadm.go:403] duration metric: took 36.621562585s to StartCluster
	I1119 22:56:02.071058 1050506 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:56:02.071123 1050506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:56:02.071803 1050506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:56:02.072024 1050506 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:56:02.072148 1050506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:56:02.072404 1050506 config.go:182] Loaded profile config "old-k8s-version-191961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:56:02.072446 1050506 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:56:02.072510 1050506 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-191961"
	I1119 22:56:02.072525 1050506 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-191961"
	I1119 22:56:02.072553 1050506 host.go:66] Checking if "old-k8s-version-191961" exists ...
	I1119 22:56:02.073047 1050506 cli_runner.go:164] Run: docker container inspect old-k8s-version-191961 --format={{.State.Status}}
	I1119 22:56:02.073475 1050506 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-191961"
	I1119 22:56:02.073502 1050506 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-191961"
	I1119 22:56:02.073780 1050506 cli_runner.go:164] Run: docker container inspect old-k8s-version-191961 --format={{.State.Status}}
	I1119 22:56:02.075869 1050506 out.go:179] * Verifying Kubernetes components...
	I1119 22:56:02.091058 1050506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:56:02.119183 1050506 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:56:02.120679 1050506 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-191961"
	I1119 22:56:02.120718 1050506 host.go:66] Checking if "old-k8s-version-191961" exists ...
	I1119 22:56:02.121119 1050506 cli_runner.go:164] Run: docker container inspect old-k8s-version-191961 --format={{.State.Status}}
	I1119 22:56:02.123914 1050506 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:56:02.123941 1050506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:56:02.124007 1050506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-191961
	I1119 22:56:02.154080 1050506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33841 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/old-k8s-version-191961/id_rsa Username:docker}
	I1119 22:56:02.158322 1050506 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:56:02.158341 1050506 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:56:02.158407 1050506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-191961
	I1119 22:56:02.196851 1050506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33841 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/old-k8s-version-191961/id_rsa Username:docker}
	I1119 22:56:02.663343 1050506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:56:02.670381 1050506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:56:02.760754 1050506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:56:02.760867 1050506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:56:03.825120 1050506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161739374s)
	I1119 22:56:04.416314 1050506 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.655415162s)
	I1119 22:56:04.417040 1050506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-191961" to be "Ready" ...
	I1119 22:56:04.417264 1050506 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.656483165s)
	I1119 22:56:04.417288 1050506 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:56:04.418227 1050506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.7478169s)
	I1119 22:56:04.421389 1050506 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:56:00.771125 1053743 out.go:252]   - Generating certificates and keys ...
	I1119 22:56:00.771276 1053743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:56:00.771376 1053743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:56:01.076313 1053743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:56:01.407284 1053743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:56:03.134871 1053743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:56:04.334427 1053743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:56:04.605281 1053743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:56:04.606416 1053743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-018508] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:56:04.424307 1050506 addons.go:515] duration metric: took 2.351841806s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:56:04.927839 1050506 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-191961" context rescaled to 1 replicas
	W1119 22:56:06.420208 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	I1119 22:56:05.117983 1053743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:56:05.120909 1053743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-018508] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:56:05.932746 1053743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:56:06.046861 1053743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:56:06.708233 1053743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:56:06.708813 1053743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:56:07.201461 1053743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:56:08.126891 1053743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:56:08.383650 1053743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:56:08.585052 1053743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:56:08.853907 1053743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:56:08.854734 1053743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:56:08.857602 1053743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:56:08.861116 1053743 out.go:252]   - Booting up control plane ...
	I1119 22:56:08.861249 1053743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:56:08.861422 1053743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:56:08.861516 1053743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:56:08.879386 1053743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:56:08.879774 1053743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:56:08.893560 1053743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:56:08.893894 1053743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:56:08.893943 1053743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:56:09.058268 1053743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:56:09.058394 1053743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1119 22:56:08.421269 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	W1119 22:56:10.920637 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	I1119 22:56:10.059828 1053743 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001799585s
	I1119 22:56:10.063547 1053743 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:56:10.063656 1053743 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:56:10.063763 1053743 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:56:10.063849 1053743 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:56:13.744328 1053743 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.680120208s
	I1119 22:56:15.694590 1053743 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.631106481s
	I1119 22:56:16.568043 1053743 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.504168949s
	I1119 22:56:16.594733 1053743 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:56:16.608860 1053743 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:56:16.624067 1053743 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:56:16.624305 1053743 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-018508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:56:16.637907 1053743 kubeadm.go:319] [bootstrap-token] Using token: fyuny6.tezt9x0c4xn9ax0i
	W1119 22:56:12.920935 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	W1119 22:56:14.921021 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	I1119 22:56:16.640885 1053743 out.go:252]   - Configuring RBAC rules ...
	I1119 22:56:16.641018 1053743 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:56:16.647494 1053743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:56:16.658167 1053743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:56:16.663449 1053743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:56:16.668200 1053743 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:56:16.673929 1053743 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:56:16.975394 1053743 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:56:17.436542 1053743 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:56:17.978255 1053743 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:56:17.978281 1053743 kubeadm.go:319] 
	I1119 22:56:17.978347 1053743 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:56:17.978357 1053743 kubeadm.go:319] 
	I1119 22:56:17.978442 1053743 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:56:17.978452 1053743 kubeadm.go:319] 
	I1119 22:56:17.978478 1053743 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:56:17.978544 1053743 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:56:17.978601 1053743 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:56:17.978610 1053743 kubeadm.go:319] 
	I1119 22:56:17.978676 1053743 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:56:17.978686 1053743 kubeadm.go:319] 
	I1119 22:56:17.978735 1053743 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:56:17.978745 1053743 kubeadm.go:319] 
	I1119 22:56:17.978800 1053743 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:56:17.978909 1053743 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:56:17.978987 1053743 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:56:17.978997 1053743 kubeadm.go:319] 
	I1119 22:56:17.979085 1053743 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:56:17.979169 1053743 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:56:17.979181 1053743 kubeadm.go:319] 
	I1119 22:56:17.979276 1053743 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fyuny6.tezt9x0c4xn9ax0i \
	I1119 22:56:17.979388 1053743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:56:17.979415 1053743 kubeadm.go:319] 	--control-plane 
	I1119 22:56:17.979432 1053743 kubeadm.go:319] 
	I1119 22:56:17.979521 1053743 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:56:17.979528 1053743 kubeadm.go:319] 
	I1119 22:56:17.979613 1053743 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fyuny6.tezt9x0c4xn9ax0i \
	I1119 22:56:17.979721 1053743 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:56:17.984177 1053743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:56:17.984414 1053743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:56:17.984530 1053743 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:56:17.984553 1053743 cni.go:84] Creating CNI manager for ""
	I1119 22:56:17.984561 1053743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:56:17.987820 1053743 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:56:17.990835 1053743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:56:17.996199 1053743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:56:17.996225 1053743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:56:18.017831 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:56:18.353480 1053743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:56:18.353618 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:18.353702 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-018508 minikube.k8s.io/updated_at=2025_11_19T22_56_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=no-preload-018508 minikube.k8s.io/primary=true
	I1119 22:56:18.505568 1053743 ops.go:34] apiserver oom_adj: -16
	I1119 22:56:18.505627 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:19.008600 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:19.506394 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 22:56:17.420576 1050506 node_ready.go:57] node "old-k8s-version-191961" has "Ready":"False" status (will retry)
	I1119 22:56:18.922242 1050506 node_ready.go:49] node "old-k8s-version-191961" is "Ready"
	I1119 22:56:18.922282 1050506 node_ready.go:38] duration metric: took 14.505213537s for node "old-k8s-version-191961" to be "Ready" ...
	I1119 22:56:18.922297 1050506 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:56:18.922386 1050506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:56:18.935521 1050506 api_server.go:72] duration metric: took 16.863442777s to wait for apiserver process to appear ...
	I1119 22:56:18.935547 1050506 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:56:18.935579 1050506 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:56:18.945356 1050506 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:56:18.946818 1050506 api_server.go:141] control plane version: v1.28.0
	I1119 22:56:18.946848 1050506 api_server.go:131] duration metric: took 11.292928ms to wait for apiserver health ...
	I1119 22:56:18.946858 1050506 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:56:18.950619 1050506 system_pods.go:59] 8 kube-system pods found
	I1119 22:56:18.950657 1050506 system_pods.go:61] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:56:18.950664 1050506 system_pods.go:61] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running
	I1119 22:56:18.950670 1050506 system_pods.go:61] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:56:18.950675 1050506 system_pods.go:61] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running
	I1119 22:56:18.950681 1050506 system_pods.go:61] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running
	I1119 22:56:18.950686 1050506 system_pods.go:61] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:56:18.950690 1050506 system_pods.go:61] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running
	I1119 22:56:18.950697 1050506 system_pods.go:61] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:56:18.950708 1050506 system_pods.go:74] duration metric: took 3.844417ms to wait for pod list to return data ...
	I1119 22:56:18.950723 1050506 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:56:18.953093 1050506 default_sa.go:45] found service account: "default"
	I1119 22:56:18.953118 1050506 default_sa.go:55] duration metric: took 2.388636ms for default service account to be created ...
	I1119 22:56:18.953128 1050506 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:56:18.956967 1050506 system_pods.go:86] 8 kube-system pods found
	I1119 22:56:18.957004 1050506 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:56:18.957011 1050506 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running
	I1119 22:56:18.957017 1050506 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:56:18.957022 1050506 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running
	I1119 22:56:18.957057 1050506 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running
	I1119 22:56:18.957071 1050506 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:56:18.957076 1050506 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running
	I1119 22:56:18.957082 1050506 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:56:18.957118 1050506 retry.go:31] will retry after 205.870345ms: missing components: kube-dns
	I1119 22:56:19.168910 1050506 system_pods.go:86] 8 kube-system pods found
	I1119 22:56:19.168944 1050506 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:56:19.168952 1050506 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running
	I1119 22:56:19.168958 1050506 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:56:19.168963 1050506 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running
	I1119 22:56:19.169004 1050506 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running
	I1119 22:56:19.169009 1050506 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:56:19.169013 1050506 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running
	I1119 22:56:19.169019 1050506 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:56:19.169042 1050506 retry.go:31] will retry after 278.495255ms: missing components: kube-dns
	I1119 22:56:19.452237 1050506 system_pods.go:86] 8 kube-system pods found
	I1119 22:56:19.452273 1050506 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:56:19.452280 1050506 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running
	I1119 22:56:19.452287 1050506 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:56:19.452292 1050506 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running
	I1119 22:56:19.452341 1050506 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running
	I1119 22:56:19.452346 1050506 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:56:19.452350 1050506 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running
	I1119 22:56:19.452356 1050506 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:56:19.452393 1050506 retry.go:31] will retry after 417.316956ms: missing components: kube-dns
	I1119 22:56:19.873738 1050506 system_pods.go:86] 8 kube-system pods found
	I1119 22:56:19.873773 1050506 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Running
	I1119 22:56:19.873780 1050506 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running
	I1119 22:56:19.873785 1050506 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:56:19.873789 1050506 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running
	I1119 22:56:19.873796 1050506 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running
	I1119 22:56:19.873800 1050506 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:56:19.873804 1050506 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running
	I1119 22:56:19.873809 1050506 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Running
	I1119 22:56:19.873816 1050506 system_pods.go:126] duration metric: took 920.681886ms to wait for k8s-apps to be running ...
	I1119 22:56:19.873831 1050506 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:56:19.873890 1050506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:56:19.887376 1050506 system_svc.go:56] duration metric: took 13.535979ms WaitForService to wait for kubelet
	I1119 22:56:19.887470 1050506 kubeadm.go:587] duration metric: took 17.815411796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:56:19.887497 1050506 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:56:19.890316 1050506 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:56:19.890349 1050506 node_conditions.go:123] node cpu capacity is 2
	I1119 22:56:19.890363 1050506 node_conditions.go:105] duration metric: took 2.859663ms to run NodePressure ...
	I1119 22:56:19.890376 1050506 start.go:242] waiting for startup goroutines ...
	I1119 22:56:19.890383 1050506 start.go:247] waiting for cluster config update ...
	I1119 22:56:19.890394 1050506 start.go:256] writing updated cluster config ...
	I1119 22:56:19.890702 1050506 ssh_runner.go:195] Run: rm -f paused
	I1119 22:56:19.895827 1050506 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:56:19.900160 1050506 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.905413 1050506 pod_ready.go:94] pod "coredns-5dd5756b68-sf6gl" is "Ready"
	I1119 22:56:19.905442 1050506 pod_ready.go:86] duration metric: took 5.253763ms for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.908483 1050506 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.913263 1050506 pod_ready.go:94] pod "etcd-old-k8s-version-191961" is "Ready"
	I1119 22:56:19.913290 1050506 pod_ready.go:86] duration metric: took 4.78185ms for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.916294 1050506 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.921185 1050506 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-191961" is "Ready"
	I1119 22:56:19.921212 1050506 pod_ready.go:86] duration metric: took 4.888912ms for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:19.924559 1050506 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:20.300781 1050506 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-191961" is "Ready"
	I1119 22:56:20.300809 1050506 pod_ready.go:86] duration metric: took 376.228141ms for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:20.500692 1050506 pod_ready.go:83] waiting for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:20.900478 1050506 pod_ready.go:94] pod "kube-proxy-rkdfn" is "Ready"
	I1119 22:56:20.900547 1050506 pod_ready.go:86] duration metric: took 399.828961ms for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:21.100690 1050506 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:21.500361 1050506 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-191961" is "Ready"
	I1119 22:56:21.500435 1050506 pod_ready.go:86] duration metric: took 399.666042ms for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:56:21.500463 1050506 pod_ready.go:40] duration metric: took 1.604602721s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:56:21.574313 1050506 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 22:56:21.577500 1050506 out.go:203] 
	W1119 22:56:21.580417 1050506 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:56:21.583403 1050506 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:56:21.587066 1050506 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-191961" cluster and "default" namespace by default
	I1119 22:56:20.006597 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:20.506411 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:21.006287 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:21.506250 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:22.006109 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:22.505733 1053743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:56:22.724015 1053743 kubeadm.go:1114] duration metric: took 4.370445944s to wait for elevateKubeSystemPrivileges
	I1119 22:56:22.724060 1053743 kubeadm.go:403] duration metric: took 22.655294247s to StartCluster
	I1119 22:56:22.724077 1053743 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:56:22.724141 1053743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:56:22.725099 1053743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:56:22.725307 1053743 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:56:22.725396 1053743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:56:22.725674 1053743 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:56:22.725710 1053743 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:56:22.725794 1053743 addons.go:70] Setting storage-provisioner=true in profile "no-preload-018508"
	I1119 22:56:22.725810 1053743 addons.go:239] Setting addon storage-provisioner=true in "no-preload-018508"
	I1119 22:56:22.725832 1053743 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:56:22.726321 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:56:22.726763 1053743 addons.go:70] Setting default-storageclass=true in profile "no-preload-018508"
	I1119 22:56:22.726781 1053743 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018508"
	I1119 22:56:22.727119 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:56:22.733517 1053743 out.go:179] * Verifying Kubernetes components...
	I1119 22:56:22.736701 1053743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:56:22.778431 1053743 addons.go:239] Setting addon default-storageclass=true in "no-preload-018508"
	I1119 22:56:22.778473 1053743 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:56:22.778956 1053743 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:56:22.779064 1053743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:56:22.782122 1053743 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:56:22.782144 1053743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:56:22.782211 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:56:22.824550 1053743 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:56:22.824574 1053743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:56:22.824645 1053743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:56:22.829095 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:56:22.854435 1053743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33846 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:56:23.079559 1053743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:56:23.114287 1053743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:56:23.183596 1053743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:56:23.186216 1053743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:56:23.832117 1053743 node_ready.go:35] waiting up to 6m0s for node "no-preload-018508" to be "Ready" ...
	I1119 22:56:23.834053 1053743 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:56:24.342731 1053743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-018508" context rescaled to 1 replicas
	I1119 22:56:24.508346 1053743 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.322057962s)
	I1119 22:56:24.513548 1053743 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:56:24.516803 1053743 addons.go:515] duration metric: took 1.791069227s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1119 22:56:25.836748 1053743 node_ready.go:57] node "no-preload-018508" has "Ready":"False" status (will retry)
	W1119 22:56:28.336850 1053743 node_ready.go:57] node "no-preload-018508" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 22:56:19 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:19.286165429Z" level=info msg="Created container 75923672e157984cdc67f71b8ef69e3bf130cfc62913c6db7df266f81b113791: kube-system/coredns-5dd5756b68-sf6gl/coredns" id=040fa163-1a99-413e-91d4-20a6dcefb8a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:19 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:19.28719536Z" level=info msg="Starting container: 75923672e157984cdc67f71b8ef69e3bf130cfc62913c6db7df266f81b113791" id=266514f9-a925-42ca-bfe2-45886fdf939d name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:56:19 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:19.29024956Z" level=info msg="Started container" PID=1936 containerID=75923672e157984cdc67f71b8ef69e3bf130cfc62913c6db7df266f81b113791 description=kube-system/coredns-5dd5756b68-sf6gl/coredns id=266514f9-a925-42ca-bfe2-45886fdf939d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffd3e4c82d0d818bb626eecc1cd3b23d75b1c53726297211127e0ca63d7abc3e
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.136322924Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8a0c2e00-c571-427f-a48e-86862eb145e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.136402465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.150715943Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e UID:d7049a7e-c4f1-41aa-b250-36991037c143 NetNS:/var/run/netns/5ab603b9-2e94-44ad-bb51-3ec938dc1091 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40001396e0}] Aliases:map[]}"
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.150759915Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.177267881Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e UID:d7049a7e-c4f1-41aa-b250-36991037c143 NetNS:/var/run/netns/5ab603b9-2e94-44ad-bb51-3ec938dc1091 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40001396e0}] Aliases:map[]}"
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.177565935Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.189745794Z" level=info msg="Ran pod sandbox 7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e with infra container: default/busybox/POD" id=8a0c2e00-c571-427f-a48e-86862eb145e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.19102051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c792cc0c-6845-482c-a4b9-4acfc9e4a4c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.191300726Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c792cc0c-6845-482c-a4b9-4acfc9e4a4c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.191442831Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c792cc0c-6845-482c-a4b9-4acfc9e4a4c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.192300511Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8325fa6-1513-40ee-ab86-e0230ac0e27b name=/runtime.v1.ImageService/PullImage
	Nov 19 22:56:22 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:22.200661431Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.336048591Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b8325fa6-1513-40ee-ab86-e0230ac0e27b name=/runtime.v1.ImageService/PullImage
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.339142414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2fe4fceb-216d-40a6-83e4-33e4868f27a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.341358159Z" level=info msg="Creating container: default/busybox/busybox" id=8b52b20c-2581-4604-add9-fad788883259 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.341626427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.353589366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.354236271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.381695982Z" level=info msg="Created container adeca018a9bb17efa243dcc1ebc030f1ca40934422fb1f795ec1e1ccd2a0ba4f: default/busybox/busybox" id=8b52b20c-2581-4604-add9-fad788883259 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.382425973Z" level=info msg="Starting container: adeca018a9bb17efa243dcc1ebc030f1ca40934422fb1f795ec1e1ccd2a0ba4f" id=a51efdcc-4096-4bd7-859b-973a214d4081 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:56:24 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:24.387386442Z" level=info msg="Started container" PID=1992 containerID=adeca018a9bb17efa243dcc1ebc030f1ca40934422fb1f795ec1e1ccd2a0ba4f description=default/busybox/busybox id=a51efdcc-4096-4bd7-859b-973a214d4081 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e
	Nov 19 22:56:30 old-k8s-version-191961 crio[841]: time="2025-11-19T22:56:30.083715558Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	adeca018a9bb1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   7b84e74ba51dc       busybox                                          default
	75923672e1579       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   ffd3e4c82d0d8       coredns-5dd5756b68-sf6gl                         kube-system
	0f99b05fe5691       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   8d45f63486d7e       storage-provisioner                              kube-system
	8a83ef1980040       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   71dbe920f226b       kindnet-dtpd4                                    kube-system
	39663a57d1041       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   823dd084ad1ee       kube-proxy-rkdfn                                 kube-system
	aa5ba9e0990cb       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      53 seconds ago      Running             kube-scheduler            0                   bd435fce565ef       kube-scheduler-old-k8s-version-191961            kube-system
	d5feec0a78017       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      53 seconds ago      Running             kube-apiserver            0                   b86b234446115       kube-apiserver-old-k8s-version-191961            kube-system
	261e6eab78f5e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      53 seconds ago      Running             kube-controller-manager   0                   a78a1dea8ddcb       kube-controller-manager-old-k8s-version-191961   kube-system
	14fceadfad514       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      53 seconds ago      Running             etcd                      0                   a3b8a9a22a454       etcd-old-k8s-version-191961                      kube-system
	
	
	==> coredns [75923672e157984cdc67f71b8ef69e3bf130cfc62913c6db7df266f81b113791] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50772 - 52273 "HINFO IN 7711331708915312403.9003289137475738899. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021267864s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-191961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-191961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-191961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:55:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-191961
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:56:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:56:21 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:56:21 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:56:21 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:56:21 +0000   Wed, 19 Nov 2025 22:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-191961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                a586ad19-3112-4f7e-a794-67583869230e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-sf6gl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-191961                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-dtpd4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-191961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-191961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-rkdfn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-191961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-191961 event: Registered Node old-k8s-version-191961 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-191961 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 22:26] overlayfs: idmapped layers are currently not supported
	[Nov19 22:31] overlayfs: idmapped layers are currently not supported
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [14fceadfad51487ee35e9e3f0de18041dda21bfde189ac90e4f341af13c5ea37] <==
	{"level":"info","ts":"2025-11-19T22:55:38.17553Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:55:38.174249Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:55:38.175665Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:55:38.174305Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-19T22:55:38.176092Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:55:38.176289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:55:38.176455Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:55:38.90319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:55:38.903437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:55:38.903495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-19T22:55:38.903539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:55:38.903583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:55:38.903621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:55:38.903652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:55:38.906083Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-191961 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:55:38.906428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:55:38.90644Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:55:38.906471Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:55:38.91151Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:55:38.906485Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:55:38.913572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:55:38.920896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:55:38.922978Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:55:38.923047Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:55:38.927396Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 22:56:31 up  4:38,  0 user,  load average: 3.88, 2.74, 2.30
	Linux old-k8s-version-191961 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a83ef198004009e1d4550faa83c0ce1ab4885f25469ce54b658b3ef19c79e37] <==
	I1119 22:56:08.032636       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:56:08.033074       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:56:08.033245       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:56:08.033359       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:56:08.033406       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:56:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:56:08.225720       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:56:08.225798       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:56:08.225835       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:56:08.226472       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:56:08.426092       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:56:08.426211       1 metrics.go:72] Registering metrics
	I1119 22:56:08.426308       1 controller.go:711] "Syncing nftables rules"
	I1119 22:56:18.231469       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:56:18.231525       1 main.go:301] handling current node
	I1119 22:56:28.227623       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:56:28.227654       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d5feec0a780179450951dabca2eca5c421994d3a3d6a3f90aeb6bf2526db6e90] <==
	I1119 22:55:46.546649       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:55:46.546704       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:55:46.546752       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:55:46.564366       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:55:46.580105       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:55:46.616657       1 trace.go:236] Trace[812183330]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b5902040-148d-4c1a-af5a-11c9d7df242d,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (19-Nov-2025 22:55:46.102) (total time: 514ms):
	Trace[812183330]: ---"Write to database call failed" len:283,err:namespaces "default" not found 513ms (22:55:46.616)
	Trace[812183330]: [514.062425ms] [514.062425ms] END
	E1119 22:55:46.623175       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 22:55:46.831074       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:55:46.998781       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:55:47.012369       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:55:47.012930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:55:47.831760       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:55:47.903658       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:55:47.972793       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:55:47.982180       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:55:47.983620       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:55:47.991074       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:55:48.237385       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:55:49.841044       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:55:49.874487       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:55:49.892120       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:56:02.456614       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:56:02.657195       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [261e6eab78f5ecf2c438d7a37a2829e79aa314c76177a0a11799480c1a85428c] <==
	I1119 22:56:01.909029       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-191961" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:56:01.957680       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-191961" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1119 22:56:02.260555       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:56:02.288419       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:56:02.288456       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:56:02.468608       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 22:56:02.780268       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rkdfn"
	I1119 22:56:02.801241       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dtpd4"
	I1119 22:56:02.801352       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-f62v5"
	I1119 22:56:02.874208       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sf6gl"
	I1119 22:56:02.927123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="458.027596ms"
	I1119 22:56:02.967301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.12308ms"
	I1119 22:56:03.084805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.443599ms"
	I1119 22:56:03.085112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.439µs"
	I1119 22:56:04.495548       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:56:04.569441       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-f62v5"
	I1119 22:56:04.588293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.462514ms"
	I1119 22:56:04.613749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.409905ms"
	I1119 22:56:04.659981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.159191ms"
	I1119 22:56:04.660104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.071µs"
	I1119 22:56:18.866573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.288µs"
	I1119 22:56:18.891169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.966µs"
	I1119 22:56:19.596153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.824853ms"
	I1119 22:56:19.596254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.455µs"
	I1119 22:56:21.834440       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [39663a57d104113177f8ed6e7e0dccba9872cf7f9cb74a61d96d9d1b250e8e51] <==
	I1119 22:56:04.225555       1 server_others.go:69] "Using iptables proxy"
	I1119 22:56:04.260665       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:56:04.402963       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:56:04.428024       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:56:04.428063       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:56:04.428072       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:56:04.428107       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:56:04.428332       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:56:04.428402       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:56:04.461767       1 config.go:188] "Starting service config controller"
	I1119 22:56:04.461794       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:56:04.461815       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:56:04.461819       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:56:04.486325       1 config.go:315] "Starting node config controller"
	I1119 22:56:04.486341       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:56:04.662976       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:56:04.663033       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:56:04.690912       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [aa5ba9e0990cb42de5a376a172b4427311dd2336e2508b83bda60c1caed7a098] <==
	W1119 22:55:46.548896       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 22:55:46.548934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 22:55:46.549021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 22:55:46.549069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 22:55:46.549155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1119 22:55:46.549191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1119 22:55:46.555265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:55:46.555361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1119 22:55:46.555512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 22:55:46.555565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:55:46.555652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:55:46.555702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:55:46.564867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:55:46.564961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1119 22:55:46.565064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1119 22:55:46.565106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:55:46.567161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1119 22:55:46.567244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1119 22:55:46.567290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:55:46.567368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:55:47.488816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 22:55:47.488976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 22:55:47.618068       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:55:47.618169       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1119 22:55:50.632695       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:56:02 old-k8s-version-191961 kubelet[1383]: I1119 22:56:02.885658    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89be932c-52ba-4d29-8aec-9dd84268d731-lib-modules\") pod \"kube-proxy-rkdfn\" (UID: \"89be932c-52ba-4d29-8aec-9dd84268d731\") " pod="kube-system/kube-proxy-rkdfn"
	Nov 19 22:56:02 old-k8s-version-191961 kubelet[1383]: I1119 22:56:02.885682    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89be932c-52ba-4d29-8aec-9dd84268d731-kube-proxy\") pod \"kube-proxy-rkdfn\" (UID: \"89be932c-52ba-4d29-8aec-9dd84268d731\") " pod="kube-system/kube-proxy-rkdfn"
	Nov 19 22:56:02 old-k8s-version-191961 kubelet[1383]: I1119 22:56:02.975562    1383 topology_manager.go:215] "Topology Admit Handler" podUID="e5d20ee8-59fe-46cb-889a-5fdeff81b3a4" podNamespace="kube-system" podName="kindnet-dtpd4"
	Nov 19 22:56:03 old-k8s-version-191961 kubelet[1383]: I1119 22:56:03.093429    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5d20ee8-59fe-46cb-889a-5fdeff81b3a4-xtables-lock\") pod \"kindnet-dtpd4\" (UID: \"e5d20ee8-59fe-46cb-889a-5fdeff81b3a4\") " pod="kube-system/kindnet-dtpd4"
	Nov 19 22:56:03 old-k8s-version-191961 kubelet[1383]: I1119 22:56:03.093587    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztfg4\" (UniqueName: \"kubernetes.io/projected/e5d20ee8-59fe-46cb-889a-5fdeff81b3a4-kube-api-access-ztfg4\") pod \"kindnet-dtpd4\" (UID: \"e5d20ee8-59fe-46cb-889a-5fdeff81b3a4\") " pod="kube-system/kindnet-dtpd4"
	Nov 19 22:56:03 old-k8s-version-191961 kubelet[1383]: I1119 22:56:03.093666    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e5d20ee8-59fe-46cb-889a-5fdeff81b3a4-cni-cfg\") pod \"kindnet-dtpd4\" (UID: \"e5d20ee8-59fe-46cb-889a-5fdeff81b3a4\") " pod="kube-system/kindnet-dtpd4"
	Nov 19 22:56:03 old-k8s-version-191961 kubelet[1383]: I1119 22:56:03.093753    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5d20ee8-59fe-46cb-889a-5fdeff81b3a4-lib-modules\") pod \"kindnet-dtpd4\" (UID: \"e5d20ee8-59fe-46cb-889a-5fdeff81b3a4\") " pod="kube-system/kindnet-dtpd4"
	Nov 19 22:56:03 old-k8s-version-191961 kubelet[1383]: W1119 22:56:03.934081    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-71dbe920f226b1625f6379fdefc6dae2aa60e4ceaf62ccfa927c86425fc81c15 WatchSource:0}: Error finding container 71dbe920f226b1625f6379fdefc6dae2aa60e4ceaf62ccfa927c86425fc81c15: Status 404 returned error can't find the container with id 71dbe920f226b1625f6379fdefc6dae2aa60e4ceaf62ccfa927c86425fc81c15
	Nov 19 22:56:04 old-k8s-version-191961 kubelet[1383]: W1119 22:56:04.047058    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-823dd084ad1eec316ec7824e41ab3b67b34783a16fc16000682305a89e1edd40 WatchSource:0}: Error finding container 823dd084ad1eec316ec7824e41ab3b67b34783a16fc16000682305a89e1edd40: Status 404 returned error can't find the container with id 823dd084ad1eec316ec7824e41ab3b67b34783a16fc16000682305a89e1edd40
	Nov 19 22:56:08 old-k8s-version-191961 kubelet[1383]: I1119 22:56:08.541337    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rkdfn" podStartSLOduration=6.541291063 podCreationTimestamp="2025-11-19 22:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:04.566379266 +0000 UTC m=+14.773762788" watchObservedRunningTime="2025-11-19 22:56:08.541291063 +0000 UTC m=+18.748674585"
	Nov 19 22:56:08 old-k8s-version-191961 kubelet[1383]: I1119 22:56:08.541955    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-dtpd4" podStartSLOduration=2.593450812 podCreationTimestamp="2025-11-19 22:56:02 +0000 UTC" firstStartedPulling="2025-11-19 22:56:03.938319562 +0000 UTC m=+14.145703084" lastFinishedPulling="2025-11-19 22:56:07.886794649 +0000 UTC m=+18.094178170" observedRunningTime="2025-11-19 22:56:08.540063649 +0000 UTC m=+18.747447171" watchObservedRunningTime="2025-11-19 22:56:08.541925898 +0000 UTC m=+18.749309420"
	Nov 19 22:56:18 old-k8s-version-191961 kubelet[1383]: I1119 22:56:18.828402    1383 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:56:18 old-k8s-version-191961 kubelet[1383]: I1119 22:56:18.863026    1383 topology_manager.go:215] "Topology Admit Handler" podUID="a5d9076c-6dc5-4069-8b3c-3cd6f314a341" podNamespace="kube-system" podName="coredns-5dd5756b68-sf6gl"
	Nov 19 22:56:18 old-k8s-version-191961 kubelet[1383]: I1119 22:56:18.875632    1383 topology_manager.go:215] "Topology Admit Handler" podUID="d53ec514-54c2-484c-abbb-f57fb0107bb1" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:56:18 old-k8s-version-191961 kubelet[1383]: I1119 22:56:18.914010    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5d9076c-6dc5-4069-8b3c-3cd6f314a341-config-volume\") pod \"coredns-5dd5756b68-sf6gl\" (UID: \"a5d9076c-6dc5-4069-8b3c-3cd6f314a341\") " pod="kube-system/coredns-5dd5756b68-sf6gl"
	Nov 19 22:56:18 old-k8s-version-191961 kubelet[1383]: I1119 22:56:18.914333    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxshs\" (UniqueName: \"kubernetes.io/projected/a5d9076c-6dc5-4069-8b3c-3cd6f314a341-kube-api-access-rxshs\") pod \"coredns-5dd5756b68-sf6gl\" (UID: \"a5d9076c-6dc5-4069-8b3c-3cd6f314a341\") " pod="kube-system/coredns-5dd5756b68-sf6gl"
	Nov 19 22:56:19 old-k8s-version-191961 kubelet[1383]: I1119 22:56:19.015173    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d53ec514-54c2-484c-abbb-f57fb0107bb1-tmp\") pod \"storage-provisioner\" (UID: \"d53ec514-54c2-484c-abbb-f57fb0107bb1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:56:19 old-k8s-version-191961 kubelet[1383]: I1119 22:56:19.015386    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7gv\" (UniqueName: \"kubernetes.io/projected/d53ec514-54c2-484c-abbb-f57fb0107bb1-kube-api-access-mp7gv\") pod \"storage-provisioner\" (UID: \"d53ec514-54c2-484c-abbb-f57fb0107bb1\") " pod="kube-system/storage-provisioner"
	Nov 19 22:56:19 old-k8s-version-191961 kubelet[1383]: W1119 22:56:19.184342    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-8d45f63486d7e10c61ec5411da5e077d6ca6f4f180a366d07cc66f8431ffadbd WatchSource:0}: Error finding container 8d45f63486d7e10c61ec5411da5e077d6ca6f4f180a366d07cc66f8431ffadbd: Status 404 returned error can't find the container with id 8d45f63486d7e10c61ec5411da5e077d6ca6f4f180a366d07cc66f8431ffadbd
	Nov 19 22:56:19 old-k8s-version-191961 kubelet[1383]: W1119 22:56:19.238172    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-ffd3e4c82d0d818bb626eecc1cd3b23d75b1c53726297211127e0ca63d7abc3e WatchSource:0}: Error finding container ffd3e4c82d0d818bb626eecc1cd3b23d75b1c53726297211127e0ca63d7abc3e: Status 404 returned error can't find the container with id ffd3e4c82d0d818bb626eecc1cd3b23d75b1c53726297211127e0ca63d7abc3e
	Nov 19 22:56:19 old-k8s-version-191961 kubelet[1383]: I1119 22:56:19.575379    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.575327661 podCreationTimestamp="2025-11-19 22:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:19.560758698 +0000 UTC m=+29.768142229" watchObservedRunningTime="2025-11-19 22:56:19.575327661 +0000 UTC m=+29.782711191"
	Nov 19 22:56:21 old-k8s-version-191961 kubelet[1383]: I1119 22:56:21.833304    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sf6gl" podStartSLOduration=19.833264985 podCreationTimestamp="2025-11-19 22:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:19.577601285 +0000 UTC m=+29.784984815" watchObservedRunningTime="2025-11-19 22:56:21.833264985 +0000 UTC m=+32.040648507"
	Nov 19 22:56:21 old-k8s-version-191961 kubelet[1383]: I1119 22:56:21.833463    1383 topology_manager.go:215] "Topology Admit Handler" podUID="d7049a7e-c4f1-41aa-b250-36991037c143" podNamespace="default" podName="busybox"
	Nov 19 22:56:21 old-k8s-version-191961 kubelet[1383]: I1119 22:56:21.933205    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-668wf\" (UniqueName: \"kubernetes.io/projected/d7049a7e-c4f1-41aa-b250-36991037c143-kube-api-access-668wf\") pod \"busybox\" (UID: \"d7049a7e-c4f1-41aa-b250-36991037c143\") " pod="default/busybox"
	Nov 19 22:56:22 old-k8s-version-191961 kubelet[1383]: W1119 22:56:22.188199    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e WatchSource:0}: Error finding container 7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e: Status 404 returned error can't find the container with id 7b84e74ba51dcb01aab0ab73b74faeaa765c49bbc525194ff72dc0928c88f15e
	
	
	==> storage-provisioner [0f99b05fe5691b5a133953240d67225a127f5230986cb30af8b28f7fd60e1a44] <==
	I1119 22:56:19.242947       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:56:19.269524       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:56:19.269624       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:56:19.284808       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:56:19.284981       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_dad24292-c6c3-4aa2-98ad-b3d41deab59f!
	I1119 22:56:19.285212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b686f98b-4f98-4c94-964a-7a5f07bf7388", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-191961_dad24292-c6c3-4aa2-98ad-b3d41deab59f became leader
	I1119 22:56:19.385462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_dad24292-c6c3-4aa2-98ad-b3d41deab59f!
	

                                                
                                                
-- /stdout --
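The node-level details at the top of this dump (capacity, allocatable, per-pod requests, events) can be regenerated on demand against the same profile; a minimal sketch, reusing the kubectl context name that appears in the helper commands below:

	kubectl --context old-k8s-version-191961 describe node old-k8s-version-191961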
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-191961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.66s)
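For local triage, this subtest can be re-run in isolation with go test's -run filter against minikube's integration package. This is a sketch only: the suite normally also needs environment-specific flags (driver, container runtime, path to the built minikube binary), which are omitted here.

	go test ./test/integration -v -timeout 90m \
	  -run 'TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive'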

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.789439ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:56:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-018508 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-018508 describe deploy/metrics-server -n kube-system: exit status 1 (128.078064ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-018508 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
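The exit-status-11 failure above comes from the paused-state probe: before enabling the addon, minikube checks whether the cluster is paused by listing runc containers on the node, and that listing fails because /run/runc does not exist on this CRI-O node. A rough way to reproduce the probe by hand against this profile (the runc command is the one quoted in the stderr block; invoking it through minikube ssh is an assumption about how to reach the node):

	out/minikube-linux-arm64 ssh -p no-preload-018508 -- sudo runc list -f json
	# expected to fail with: open /run/runc: no such file or directory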
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-018508
helpers_test.go:243: (dbg) docker inspect no-preload-018508:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	        "Created": "2025-11-19T22:55:31.446403274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1054061,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:55:31.55319524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hosts",
	        "LogPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90-json.log",
	        "Name": "/no-preload-018508",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-018508:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-018508",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	                "LowerDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-018508",
	                "Source": "/var/lib/docker/volumes/no-preload-018508/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-018508",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-018508",
	                "name.minikube.sigs.k8s.io": "no-preload-018508",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8704932c28a5b79c4cd68a362ade786e65d73e90730fcec93dcf7dd1d8d57c56",
	            "SandboxKey": "/var/run/docker/netns/8704932c28a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-018508": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:fc:7a:c7:9f:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecb686e72be7045ea0f2163632862012f1ddc546b19b453f0aeaec0f227ef432",
	                    "EndpointID": "3649f87d05522e3845cf1a0d3bad4d8766f8da48654a32a6c32745c9b0262a7f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-018508",
	                        "9259db4142ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
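The inspect dump above is mostly useful for the State and NetworkSettings blocks; when only a single field is needed, docker inspect's --format Go template avoids reading the full JSON. A small sketch using the container name from this report, with the expected values taken from the dump itself:

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-018508
	# 192.168.85.2 for the state captured above
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-018508
	# running paused=false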
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-018508 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-018508 logs -n 25: (1.375388175s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-334366 sudo containerd config dump                                                                                                                                                                                                  │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ ssh     │ -p cilium-334366 sudo crio config                                                                                                                                                                                                             │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │                     │
	│ delete  │ -p cilium-334366                                                                                                                                                                                                                              │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:50 UTC │
	│ start   │ -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:54 UTC │
	│ delete  │ -p force-systemd-env-860026                                                                                                                                                                                                                   │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:56:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:56:45.217361 1058620 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:56:45.217590 1058620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:56:45.217632 1058620 out.go:374] Setting ErrFile to fd 2...
	I1119 22:56:45.217709 1058620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:56:45.218739 1058620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:56:45.221851 1058620 out.go:368] Setting JSON to false
	I1119 22:56:45.224353 1058620 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16734,"bootTime":1763576271,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:56:45.224629 1058620 start.go:143] virtualization:  
	I1119 22:56:45.228636 1058620 out.go:179] * [old-k8s-version-191961] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:56:45.234694 1058620 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:56:45.234699 1058620 notify.go:221] Checking for updates...
	I1119 22:56:45.238935 1058620 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:56:45.247076 1058620 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:56:45.250786 1058620 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:56:45.254061 1058620 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:56:45.258082 1058620 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:56:45.262410 1058620 config.go:182] Loaded profile config "old-k8s-version-191961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:56:45.267527 1058620 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 22:56:45.271259 1058620 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:56:45.305709 1058620 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:56:45.305836 1058620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:56:45.364888 1058620 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:56:45.355765377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:56:45.364991 1058620 docker.go:319] overlay module found
	I1119 22:56:45.368300 1058620 out.go:179] * Using the docker driver based on existing profile
	I1119 22:56:45.371246 1058620 start.go:309] selected driver: docker
	I1119 22:56:45.371266 1058620 start.go:930] validating driver "docker" against &{Name:old-k8s-version-191961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-191961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:56:45.371499 1058620 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:56:45.372408 1058620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:56:45.429920 1058620 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:56:45.421263613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:56:45.430260 1058620 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:56:45.430294 1058620 cni.go:84] Creating CNI manager for ""
	I1119 22:56:45.430354 1058620 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:56:45.430392 1058620 start.go:353] cluster config:
	{Name:old-k8s-version-191961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-191961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:56:45.433542 1058620 out.go:179] * Starting "old-k8s-version-191961" primary control-plane node in "old-k8s-version-191961" cluster
	I1119 22:56:45.436325 1058620 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:56:45.439228 1058620 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:56:45.442092 1058620 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 22:56:45.442141 1058620 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 22:56:45.442172 1058620 cache.go:65] Caching tarball of preloaded images
	I1119 22:56:45.442172 1058620 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:56:45.442254 1058620 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:56:45.442264 1058620 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 22:56:45.442375 1058620 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/config.json ...
	I1119 22:56:45.471563 1058620 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:56:45.471646 1058620 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:56:45.471669 1058620 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:56:45.471732 1058620 start.go:360] acquireMachinesLock for old-k8s-version-191961: {Name:mk883aea5f3f27b3d830224d4184a817f9737c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:56:45.471815 1058620 start.go:364] duration metric: took 59.939µs to acquireMachinesLock for "old-k8s-version-191961"
	I1119 22:56:45.471836 1058620 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:56:45.471853 1058620 fix.go:54] fixHost starting: 
	I1119 22:56:45.472142 1058620 cli_runner.go:164] Run: docker container inspect old-k8s-version-191961 --format={{.State.Status}}
	I1119 22:56:45.493347 1058620 fix.go:112] recreateIfNeeded on old-k8s-version-191961: state=Stopped err=<nil>
	W1119 22:56:45.493377 1058620 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 19 22:56:37 no-preload-018508 crio[843]: time="2025-11-19T22:56:37.317505423Z" level=info msg="Created container fb8e193ef8a316507ef7aab9adf73a1d8800df93dcb8c43b54766c7a7ac1ce2d: kube-system/coredns-66bc5c9577-rxhmf/coredns" id=31ad35ef-96cc-49bb-9bfe-850313ae8844 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:37 no-preload-018508 crio[843]: time="2025-11-19T22:56:37.318626407Z" level=info msg="Starting container: fb8e193ef8a316507ef7aab9adf73a1d8800df93dcb8c43b54766c7a7ac1ce2d" id=6cd83df5-fb49-42da-9a22-f590bf2d9160 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:56:37 no-preload-018508 crio[843]: time="2025-11-19T22:56:37.332317285Z" level=info msg="Started container" PID=2521 containerID=fb8e193ef8a316507ef7aab9adf73a1d8800df93dcb8c43b54766c7a7ac1ce2d description=kube-system/coredns-66bc5c9577-rxhmf/coredns id=6cd83df5-fb49-42da-9a22-f590bf2d9160 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ad8d1a18dc017b9148a67c0ef4a59b6f10844f1eba2e9790e556ce8d585c477
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.94160797Z" level=info msg="Running pod sandbox: default/busybox/POD" id=93119479-e724-425b-97ea-741e2fd8acbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.941682358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.95094219Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c2c437fbe4289f6b8344c6531967301a255757a936becba59389e5f046eb06a UID:bde3dd99-1e55-4b1c-bc02-a95506e986c0 NetNS:/var/run/netns/0cc2dbcb-74c5-485c-9399-31d38bfb8b52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400136e8a8}] Aliases:map[]}"
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.950982666Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.964110808Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c2c437fbe4289f6b8344c6531967301a255757a936becba59389e5f046eb06a UID:bde3dd99-1e55-4b1c-bc02-a95506e986c0 NetNS:/var/run/netns/0cc2dbcb-74c5-485c-9399-31d38bfb8b52 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400136e8a8}] Aliases:map[]}"
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.964266837Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.96725506Z" level=info msg="Ran pod sandbox 3c2c437fbe4289f6b8344c6531967301a255757a936becba59389e5f046eb06a with infra container: default/busybox/POD" id=93119479-e724-425b-97ea-741e2fd8acbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.969556696Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b73118ba-6914-4ff7-aa5d-eb1277658e8d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.969764993Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b73118ba-6914-4ff7-aa5d-eb1277658e8d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.96987381Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b73118ba-6914-4ff7-aa5d-eb1277658e8d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.971476628Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c65b4639-bdf4-486f-aec5-3371c429c0bd name=/runtime.v1.ImageService/PullImage
	Nov 19 22:56:39 no-preload-018508 crio[843]: time="2025-11-19T22:56:39.972634937Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.930767234Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c65b4639-bdf4-486f-aec5-3371c429c0bd name=/runtime.v1.ImageService/PullImage
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.931320707Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0aa66ab-dfb3-4d7c-af39-8dc51a8f4f10 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.934400039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00d94983-c784-46fb-9313-af3f73bbf38b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.943244016Z" level=info msg="Creating container: default/busybox/busybox" id=92fa64b7-49ef-4b3b-8b6b-aef20715f209 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.943361366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.948257959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.948743993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.963255923Z" level=info msg="Created container a7465aafbeebddb26c98174a777b77c31aafd291767fee13a8ed07094085d622: default/busybox/busybox" id=92fa64b7-49ef-4b3b-8b6b-aef20715f209 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.964888378Z" level=info msg="Starting container: a7465aafbeebddb26c98174a777b77c31aafd291767fee13a8ed07094085d622" id=cc16cc3e-13ac-4b21-9307-a0c6a2b3688c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:56:41 no-preload-018508 crio[843]: time="2025-11-19T22:56:41.966775037Z" level=info msg="Started container" PID=2582 containerID=a7465aafbeebddb26c98174a777b77c31aafd291767fee13a8ed07094085d622 description=default/busybox/busybox id=cc16cc3e-13ac-4b21-9307-a0c6a2b3688c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2c437fbe4289f6b8344c6531967301a255757a936becba59389e5f046eb06a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a7465aafbeebd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   3c2c437fbe428       busybox                                     default
	fb8e193ef8a31       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   4ad8d1a18dc01       coredns-66bc5c9577-rxhmf                    kube-system
	5a62fc9c5179b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   30f5d33ab879e       storage-provisioner                         kube-system
	69741bce7853f       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   6d6c2720f0a47       kindnet-2n4sq                               kube-system
	170346bd98c64       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   defb7b8dad04e       kube-proxy-pn4pw                            kube-system
	b3eb806799dab       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      40 seconds ago      Running             kube-scheduler            0                   35aa3a73fe715       kube-scheduler-no-preload-018508            kube-system
	812c17d05fa70       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      40 seconds ago      Running             kube-controller-manager   0                   aeb787a76ffd6       kube-controller-manager-no-preload-018508   kube-system
	b7c78e658ffbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      40 seconds ago      Running             kube-apiserver            0                   dc12d3f3374c9       kube-apiserver-no-preload-018508            kube-system
	b5b90e9417054       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      40 seconds ago      Running             etcd                      0                   709dccca11877       etcd-no-preload-018508                      kube-system
	
	
	==> coredns [fb8e193ef8a316507ef7aab9adf73a1d8800df93dcb8c43b54766c7a7ac1ce2d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58162 - 22677 "HINFO IN 8686339483044377647.4328170891919819400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003739497s
	
	
	==> describe nodes <==
	Name:               no-preload-018508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-018508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-018508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_56_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:56:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-018508
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:56:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:56:48 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:56:48 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:56:48 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:56:48 +0000   Wed, 19 Nov 2025 22:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-018508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                09a9f2b2-499b-4381-b448-723471f1496f
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rxhmf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-018508                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-2n4sq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-018508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-018508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-pn4pw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-018508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 27s   kube-proxy       
	  Normal   Starting                 33s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s   kubelet          Node no-preload-018508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s   kubelet          Node no-preload-018508 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s   kubelet          Node no-preload-018508 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s   node-controller  Node no-preload-018508 event: Registered Node no-preload-018508 in Controller
	  Normal   NodeReady                14s   kubelet          Node no-preload-018508 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 22:26] overlayfs: idmapped layers are currently not supported
	[Nov19 22:31] overlayfs: idmapped layers are currently not supported
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b5b90e941705419c73861d16446b8ca544da34acb9cd5d4f9d218200163ea54a] <==
	{"level":"warn","ts":"2025-11-19T22:56:12.731111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.756141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.776542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.795233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.825569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.838614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.859015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.871606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.889021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.918513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.958073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.960712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.980817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:12.990918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.013089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.031480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.051931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.062582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.080498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.097702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.121425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.147088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.178028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.201831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:56:13.369876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:56:50 up  4:38,  0 user,  load average: 3.15, 2.64, 2.28
	Linux no-preload-018508 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69741bce7853f3ade3890dbf00e02f821899143273243b6fe20eb69c1a0310af] <==
	I1119 22:56:26.327898       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:56:26.419183       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:56:26.419472       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:56:26.419525       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:56:26.419551       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:56:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:56:26.623200       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:56:26.623304       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:56:26.623341       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:56:26.624846       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:56:26.924110       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:56:26.924138       1 metrics.go:72] Registering metrics
	I1119 22:56:26.924200       1 controller.go:711] "Syncing nftables rules"
	I1119 22:56:36.623067       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:56:36.623110       1 main.go:301] handling current node
	I1119 22:56:46.619912       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:56:46.619949       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7c78e658ffbf37de7fbbb682ab2f098718b9eefb5ffa8b6be0e2471934ac9f0] <==
	I1119 22:56:14.580789       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:56:14.580796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:56:14.580803       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:56:14.608659       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:56:14.613135       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:56:14.629180       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:56:14.652588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:56:14.653532       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:56:15.255223       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:56:15.261878       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:56:15.261904       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:56:16.149675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:56:16.200629       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:56:16.340052       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:56:16.347375       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 22:56:16.348494       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:56:16.353424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:56:16.396707       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:56:17.419294       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:56:17.435471       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:56:17.448820       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:56:22.215194       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:56:22.222426       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:56:22.302307       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:56:22.412032       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [812c17d05fa70b3b847e09e62c9d91edd13dfece9b953986e76f4f90c7a0b6d7] <==
	I1119 22:56:21.402171       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:56:21.404216       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:56:21.405400       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:56:21.411430       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-018508" podCIDRs=["10.244.0.0/24"]
	I1119 22:56:21.413643       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:56:21.414598       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:56:21.422996       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:56:21.423080       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:56:21.423178       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-018508"
	I1119 22:56:21.423246       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:56:21.433309       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:56:21.439594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:56:21.443258       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:56:21.444513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:56:21.444542       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:56:21.444549       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:56:21.444942       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:56:21.445549       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:56:21.445659       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:56:21.445722       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:56:21.446099       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:56:21.446339       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:56:21.445733       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:56:21.457784       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:56:41.426370       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [170346bd98c646012029ae8ae66e329adf005d36a4502099c42c07db1fe6b097] <==
	I1119 22:56:23.045845       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:56:23.204592       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:56:23.309625       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:56:23.309671       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:56:23.309755       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:56:23.427922       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:56:23.427973       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:56:23.437305       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:56:23.437668       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:56:23.437687       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:56:23.444910       1 config.go:200] "Starting service config controller"
	I1119 22:56:23.444929       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:56:23.444949       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:56:23.444954       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:56:23.444965       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:56:23.444969       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:56:23.447953       1 config.go:309] "Starting node config controller"
	I1119 22:56:23.447967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:56:23.447974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:56:23.546084       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:56:23.546124       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:56:23.546248       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b3eb806799dabc2467df8f6d3f0ca937238282a72f64edd67988f4218e71e1d4] <==
	I1119 22:56:15.669186       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:56:15.681052       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:56:15.681750       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:56:15.682121       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:56:15.682505       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:56:15.687355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:56:15.691463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:56:15.691634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:56:15.693614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:56:15.693772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:56:15.700103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:56:15.700235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:56:15.700289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:56:15.700357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:56:15.700775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:56:15.700788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:56:15.700843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:56:15.700858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:56:15.700886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:56:15.700922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:56:15.701001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:56:15.701114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:56:15.701129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:56:15.701177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1119 22:56:17.283132       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:56:18 no-preload-018508 kubelet[2031]: I1119 22:56:18.624448    2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-018508" podStartSLOduration=1.62442572 podStartE2EDuration="1.62442572s" podCreationTimestamp="2025-11-19 22:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:18.604982623 +0000 UTC m=+1.366378145" watchObservedRunningTime="2025-11-19 22:56:18.62442572 +0000 UTC m=+1.385821422"
	Nov 19 22:56:21 no-preload-018508 kubelet[2031]: I1119 22:56:21.471964    2031 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:56:21 no-preload-018508 kubelet[2031]: I1119 22:56:21.472847    2031 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425103    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2162b2bd-a5f2-4538-8a48-79a0246a58eb-kube-proxy\") pod \"kube-proxy-pn4pw\" (UID: \"2162b2bd-a5f2-4538-8a48-79a0246a58eb\") " pod="kube-system/kube-proxy-pn4pw"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425314    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2162b2bd-a5f2-4538-8a48-79a0246a58eb-lib-modules\") pod \"kube-proxy-pn4pw\" (UID: \"2162b2bd-a5f2-4538-8a48-79a0246a58eb\") " pod="kube-system/kube-proxy-pn4pw"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425439    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2162b2bd-a5f2-4538-8a48-79a0246a58eb-xtables-lock\") pod \"kube-proxy-pn4pw\" (UID: \"2162b2bd-a5f2-4538-8a48-79a0246a58eb\") " pod="kube-system/kube-proxy-pn4pw"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425556    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c7558d8-6110-4c54-851c-23315a8a713c-xtables-lock\") pod \"kindnet-2n4sq\" (UID: \"0c7558d8-6110-4c54-851c-23315a8a713c\") " pod="kube-system/kindnet-2n4sq"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425664    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c7558d8-6110-4c54-851c-23315a8a713c-lib-modules\") pod \"kindnet-2n4sq\" (UID: \"0c7558d8-6110-4c54-851c-23315a8a713c\") " pod="kube-system/kindnet-2n4sq"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425811    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9sl\" (UniqueName: \"kubernetes.io/projected/2162b2bd-a5f2-4538-8a48-79a0246a58eb-kube-api-access-hk9sl\") pod \"kube-proxy-pn4pw\" (UID: \"2162b2bd-a5f2-4538-8a48-79a0246a58eb\") " pod="kube-system/kube-proxy-pn4pw"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425881    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0c7558d8-6110-4c54-851c-23315a8a713c-cni-cfg\") pod \"kindnet-2n4sq\" (UID: \"0c7558d8-6110-4c54-851c-23315a8a713c\") " pod="kube-system/kindnet-2n4sq"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.425917    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tzs\" (UniqueName: \"kubernetes.io/projected/0c7558d8-6110-4c54-851c-23315a8a713c-kube-api-access-86tzs\") pod \"kindnet-2n4sq\" (UID: \"0c7558d8-6110-4c54-851c-23315a8a713c\") " pod="kube-system/kindnet-2n4sq"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: I1119 22:56:22.570281    2031 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: W1119 22:56:22.668649    2031 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-defb7b8dad04e96d90abe5322788cfdeb200f77f93f601c7b63a87c1614e663b WatchSource:0}: Error finding container defb7b8dad04e96d90abe5322788cfdeb200f77f93f601c7b63a87c1614e663b: Status 404 returned error can't find the container with id defb7b8dad04e96d90abe5322788cfdeb200f77f93f601c7b63a87c1614e663b
	Nov 19 22:56:22 no-preload-018508 kubelet[2031]: W1119 22:56:22.671849    2031 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-6d6c2720f0a47ecbf2aed2369805411a002e029ed80152fe73c26c507e8cba87 WatchSource:0}: Error finding container 6d6c2720f0a47ecbf2aed2369805411a002e029ed80152fe73c26c507e8cba87: Status 404 returned error can't find the container with id 6d6c2720f0a47ecbf2aed2369805411a002e029ed80152fe73c26c507e8cba87
	Nov 19 22:56:23 no-preload-018508 kubelet[2031]: I1119 22:56:23.726325    2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pn4pw" podStartSLOduration=1.7263058 podStartE2EDuration="1.7263058s" podCreationTimestamp="2025-11-19 22:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:23.627207306 +0000 UTC m=+6.388602836" watchObservedRunningTime="2025-11-19 22:56:23.7263058 +0000 UTC m=+6.487701330"
	Nov 19 22:56:29 no-preload-018508 kubelet[2031]: I1119 22:56:29.259166    2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2n4sq" podStartSLOduration=3.738224867 podStartE2EDuration="7.259148647s" podCreationTimestamp="2025-11-19 22:56:22 +0000 UTC" firstStartedPulling="2025-11-19 22:56:22.682264602 +0000 UTC m=+5.443660124" lastFinishedPulling="2025-11-19 22:56:26.203188382 +0000 UTC m=+8.964583904" observedRunningTime="2025-11-19 22:56:26.651579781 +0000 UTC m=+9.412975311" watchObservedRunningTime="2025-11-19 22:56:29.259148647 +0000 UTC m=+12.020544202"
	Nov 19 22:56:36 no-preload-018508 kubelet[2031]: I1119 22:56:36.871706    2031 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:56:36 no-preload-018508 kubelet[2031]: I1119 22:56:36.939470    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c83bdb2-ec25-44e1-ae35-77e4dea28165-tmp\") pod \"storage-provisioner\" (UID: \"6c83bdb2-ec25-44e1-ae35-77e4dea28165\") " pod="kube-system/storage-provisioner"
	Nov 19 22:56:36 no-preload-018508 kubelet[2031]: I1119 22:56:36.939707    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hnc\" (UniqueName: \"kubernetes.io/projected/6c83bdb2-ec25-44e1-ae35-77e4dea28165-kube-api-access-z5hnc\") pod \"storage-provisioner\" (UID: \"6c83bdb2-ec25-44e1-ae35-77e4dea28165\") " pod="kube-system/storage-provisioner"
	Nov 19 22:56:36 no-preload-018508 kubelet[2031]: I1119 22:56:36.939833    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71cc8e54-484a-403e-bda7-a4e70390d4c0-config-volume\") pod \"coredns-66bc5c9577-rxhmf\" (UID: \"71cc8e54-484a-403e-bda7-a4e70390d4c0\") " pod="kube-system/coredns-66bc5c9577-rxhmf"
	Nov 19 22:56:36 no-preload-018508 kubelet[2031]: I1119 22:56:36.939903    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvfr\" (UniqueName: \"kubernetes.io/projected/71cc8e54-484a-403e-bda7-a4e70390d4c0-kube-api-access-2dvfr\") pod \"coredns-66bc5c9577-rxhmf\" (UID: \"71cc8e54-484a-403e-bda7-a4e70390d4c0\") " pod="kube-system/coredns-66bc5c9577-rxhmf"
	Nov 19 22:56:37 no-preload-018508 kubelet[2031]: W1119 22:56:37.281618    2031 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-4ad8d1a18dc017b9148a67c0ef4a59b6f10844f1eba2e9790e556ce8d585c477 WatchSource:0}: Error finding container 4ad8d1a18dc017b9148a67c0ef4a59b6f10844f1eba2e9790e556ce8d585c477: Status 404 returned error can't find the container with id 4ad8d1a18dc017b9148a67c0ef4a59b6f10844f1eba2e9790e556ce8d585c477
	Nov 19 22:56:37 no-preload-018508 kubelet[2031]: I1119 22:56:37.710590    2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rxhmf" podStartSLOduration=15.710570819 podStartE2EDuration="15.710570819s" podCreationTimestamp="2025-11-19 22:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:37.675929615 +0000 UTC m=+20.437325145" watchObservedRunningTime="2025-11-19 22:56:37.710570819 +0000 UTC m=+20.471966349"
	Nov 19 22:56:39 no-preload-018508 kubelet[2031]: I1119 22:56:39.631805    2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.631767911 podStartE2EDuration="15.631767911s" podCreationTimestamp="2025-11-19 22:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:56:37.762283298 +0000 UTC m=+20.523678836" watchObservedRunningTime="2025-11-19 22:56:39.631767911 +0000 UTC m=+22.393163441"
	Nov 19 22:56:39 no-preload-018508 kubelet[2031]: I1119 22:56:39.660260    2031 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72dxg\" (UniqueName: \"kubernetes.io/projected/bde3dd99-1e55-4b1c-bc02-a95506e986c0-kube-api-access-72dxg\") pod \"busybox\" (UID: \"bde3dd99-1e55-4b1c-bc02-a95506e986c0\") " pod="default/busybox"
	
	
	==> storage-provisioner [5a62fc9c5179b5f69c382f7311f030af11d4d0fdcbf0c2ebdb9fe87cab320545] <==
	I1119 22:56:37.286037       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:56:37.305774       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:56:37.305905       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:56:37.308872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:37.316558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:56:37.316812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:56:37.317061       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-018508_4b42d21f-be9f-41bf-8818-dc27541da5b9!
	I1119 22:56:37.323766       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f174fb19-ca6a-48a2-8622-e239a84010c4", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-018508_4b42d21f-be9f-41bf-8818-dc27541da5b9 became leader
	W1119 22:56:37.323920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:37.344111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:56:37.421417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-018508_4b42d21f-be9f-41bf-8818-dc27541da5b9!
	W1119 22:56:39.347421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:39.352146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:41.355157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:41.359399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:43.362308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:43.366637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:45.369565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:45.373888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:47.377651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:47.383442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:49.386178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:56:49.393281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-018508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-191961 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-191961 --alsologtostderr -v=1: exit status 80 (1.839914658s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-191961 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:57:51.127931 1063735 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:57:51.128080 1063735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:51.128092 1063735 out.go:374] Setting ErrFile to fd 2...
	I1119 22:57:51.128120 1063735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:51.128841 1063735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:57:51.129183 1063735 out.go:368] Setting JSON to false
	I1119 22:57:51.129242 1063735 mustload.go:66] Loading cluster: old-k8s-version-191961
	I1119 22:57:51.129704 1063735 config.go:182] Loaded profile config "old-k8s-version-191961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:57:51.130223 1063735 cli_runner.go:164] Run: docker container inspect old-k8s-version-191961 --format={{.State.Status}}
	I1119 22:57:51.149041 1063735 host.go:66] Checking if "old-k8s-version-191961" exists ...
	I1119 22:57:51.150312 1063735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:57:51.208663 1063735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:57:51.198899135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:57:51.209436 1063735 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-191961 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:57:51.212915 1063735 out.go:179] * Pausing node old-k8s-version-191961 ... 
	I1119 22:57:51.216723 1063735 host.go:66] Checking if "old-k8s-version-191961" exists ...
	I1119 22:57:51.217087 1063735 ssh_runner.go:195] Run: systemctl --version
	I1119 22:57:51.217187 1063735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-191961
	I1119 22:57:51.235954 1063735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33851 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/old-k8s-version-191961/id_rsa Username:docker}
	I1119 22:57:51.337855 1063735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:51.357776 1063735 pause.go:52] kubelet running: true
	I1119 22:57:51.357857 1063735 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:57:51.614379 1063735 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:57:51.614468 1063735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:57:51.682431 1063735 cri.go:89] found id: "7c81dcbf758f34246c6eb872955ba1903e0b6b3d9cf8cdf578aa6ba6198b72ac"
	I1119 22:57:51.682453 1063735 cri.go:89] found id: "66d481d552c0c92d846bfcde284f4bbca87eb63d685d51037b2d199683b3b8b3"
	I1119 22:57:51.682457 1063735 cri.go:89] found id: "b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241"
	I1119 22:57:51.682461 1063735 cri.go:89] found id: "48938cdfbab01455db270b6a2524cd6415ce23a3701805784912fac3f64e75b3"
	I1119 22:57:51.682465 1063735 cri.go:89] found id: "991962c053d8c37f8fb7d52404d9d0e4a26ce40375bc807670a69e5e309d0e10"
	I1119 22:57:51.682468 1063735 cri.go:89] found id: "40ffa36b8db7d4bddf7eec2be93374f85c68b4f8475b1ca8f95a6e259bc4b4ec"
	I1119 22:57:51.682471 1063735 cri.go:89] found id: "dc473b93b033b07c30f493568e843909bf72c0923b3edcfa7b790acdcd5d2734"
	I1119 22:57:51.682475 1063735 cri.go:89] found id: "d5661c576bd48805df3310a42259b25f3d6358219721fabef991e12173d9f4d0"
	I1119 22:57:51.682478 1063735 cri.go:89] found id: "a0adb72d131f7a3e37e9659ed410147e226ae0a9c56505fa2588b71340e71ecd"
	I1119 22:57:51.682488 1063735 cri.go:89] found id: "1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8"
	I1119 22:57:51.682492 1063735 cri.go:89] found id: "15f172e1e9015f511d2f9658c286a7d2e124c2f8d7ce49a0480611b9338af010"
	I1119 22:57:51.682495 1063735 cri.go:89] found id: ""
	I1119 22:57:51.682543 1063735 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:57:51.702466 1063735 retry.go:31] will retry after 261.809075ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:51Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:57:51.964976 1063735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:51.977875 1063735 pause.go:52] kubelet running: false
	I1119 22:57:51.977941 1063735 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:57:52.171134 1063735 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:57:52.171277 1063735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:57:52.238447 1063735 cri.go:89] found id: "7c81dcbf758f34246c6eb872955ba1903e0b6b3d9cf8cdf578aa6ba6198b72ac"
	I1119 22:57:52.238518 1063735 cri.go:89] found id: "66d481d552c0c92d846bfcde284f4bbca87eb63d685d51037b2d199683b3b8b3"
	I1119 22:57:52.238552 1063735 cri.go:89] found id: "b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241"
	I1119 22:57:52.238577 1063735 cri.go:89] found id: "48938cdfbab01455db270b6a2524cd6415ce23a3701805784912fac3f64e75b3"
	I1119 22:57:52.238597 1063735 cri.go:89] found id: "991962c053d8c37f8fb7d52404d9d0e4a26ce40375bc807670a69e5e309d0e10"
	I1119 22:57:52.238633 1063735 cri.go:89] found id: "40ffa36b8db7d4bddf7eec2be93374f85c68b4f8475b1ca8f95a6e259bc4b4ec"
	I1119 22:57:52.238657 1063735 cri.go:89] found id: "dc473b93b033b07c30f493568e843909bf72c0923b3edcfa7b790acdcd5d2734"
	I1119 22:57:52.238681 1063735 cri.go:89] found id: "d5661c576bd48805df3310a42259b25f3d6358219721fabef991e12173d9f4d0"
	I1119 22:57:52.238719 1063735 cri.go:89] found id: "a0adb72d131f7a3e37e9659ed410147e226ae0a9c56505fa2588b71340e71ecd"
	I1119 22:57:52.238747 1063735 cri.go:89] found id: "1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8"
	I1119 22:57:52.238770 1063735 cri.go:89] found id: "15f172e1e9015f511d2f9658c286a7d2e124c2f8d7ce49a0480611b9338af010"
	I1119 22:57:52.238805 1063735 cri.go:89] found id: ""
	I1119 22:57:52.238926 1063735 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:57:52.250024 1063735 retry.go:31] will retry after 350.565075ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:52Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:57:52.601711 1063735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:52.616616 1063735 pause.go:52] kubelet running: false
	I1119 22:57:52.616687 1063735 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:57:52.797794 1063735 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:57:52.797879 1063735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:57:52.873493 1063735 cri.go:89] found id: "7c81dcbf758f34246c6eb872955ba1903e0b6b3d9cf8cdf578aa6ba6198b72ac"
	I1119 22:57:52.873516 1063735 cri.go:89] found id: "66d481d552c0c92d846bfcde284f4bbca87eb63d685d51037b2d199683b3b8b3"
	I1119 22:57:52.873521 1063735 cri.go:89] found id: "b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241"
	I1119 22:57:52.873526 1063735 cri.go:89] found id: "48938cdfbab01455db270b6a2524cd6415ce23a3701805784912fac3f64e75b3"
	I1119 22:57:52.873539 1063735 cri.go:89] found id: "991962c053d8c37f8fb7d52404d9d0e4a26ce40375bc807670a69e5e309d0e10"
	I1119 22:57:52.873544 1063735 cri.go:89] found id: "40ffa36b8db7d4bddf7eec2be93374f85c68b4f8475b1ca8f95a6e259bc4b4ec"
	I1119 22:57:52.873548 1063735 cri.go:89] found id: "dc473b93b033b07c30f493568e843909bf72c0923b3edcfa7b790acdcd5d2734"
	I1119 22:57:52.873552 1063735 cri.go:89] found id: "d5661c576bd48805df3310a42259b25f3d6358219721fabef991e12173d9f4d0"
	I1119 22:57:52.873556 1063735 cri.go:89] found id: "a0adb72d131f7a3e37e9659ed410147e226ae0a9c56505fa2588b71340e71ecd"
	I1119 22:57:52.873562 1063735 cri.go:89] found id: "1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8"
	I1119 22:57:52.873576 1063735 cri.go:89] found id: "15f172e1e9015f511d2f9658c286a7d2e124c2f8d7ce49a0480611b9338af010"
	I1119 22:57:52.873582 1063735 cri.go:89] found id: ""
	I1119 22:57:52.873631 1063735 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:57:52.888829 1063735 out.go:203] 
	W1119 22:57:52.891849 1063735 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:57:52.891881 1063735 out.go:285] * 
	* 
	W1119 22:57:52.900803 1063735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:57:52.903623 1063735 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-191961 --alsologtostderr -v=1 failed: exit status 80
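The exit status 80 above is the GUEST_PAUSE error shown in the stderr block: minikube's pause path lists running containers with "sudo runc list -f json", and that command keeps failing on this node because /run/runc does not exist. A minimal manual check, assuming the old-k8s-version-191961 profile is still running; this is a hypothetical reproduction sketch (not part of the recorded test run) that reuses only invocations already shown in this report, and its output will vary:

	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo runc list -f json   # expected to fail with "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo ls /run/runc        # confirms the runc root directory is absent
	out/minikube-linux-arm64 ssh -p old-k8s-version-191961 -- sudo crictl ps           # the CRI-O containers are still visible through crictl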
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-191961
helpers_test.go:243: (dbg) docker inspect old-k8s-version-191961:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	        "Created": "2025-11-19T22:55:13.430692279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1058752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:56:45.530168221Z",
	            "FinishedAt": "2025-11-19T22:56:44.534949289Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hosts",
	        "LogPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee-json.log",
	        "Name": "/old-k8s-version-191961",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-191961:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-191961",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	                "LowerDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-191961",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-191961/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-191961",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e2580d9d8bcb4dc74e6d7e4adaec49f42dfbf838f387c168b1846820ea4053a",
	            "SandboxKey": "/var/run/docker/netns/5e2580d9d8bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-191961": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:5b:58:65:22:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47f03f83c3fe719b80c42f4da32b57adc2e9e8ee352f6eea7c164878ce0bc301",
	                    "EndpointID": "678b0e0a7035e7c3e8a6945ad3b99792c0c1943f1d76cda27bd634edf3fed170",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-191961",
	                        "e6ae989c9f99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961: exit status 2 (380.552236ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25: (1.408878715s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cilium-334366                                                                                                                                                                                                                              │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:50 UTC │
	│ start   │ -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:54 UTC │
	│ delete  │ -p force-systemd-env-860026                                                                                                                                                                                                                   │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:57:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:57:04.031783 1061187 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:57:04.031928 1061187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:04.031940 1061187 out.go:374] Setting ErrFile to fd 2...
	I1119 22:57:04.031947 1061187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:04.032256 1061187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:57:04.032677 1061187 out.go:368] Setting JSON to false
	I1119 22:57:04.033710 1061187 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16753,"bootTime":1763576271,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:57:04.033785 1061187 start.go:143] virtualization:  
	I1119 22:57:04.038853 1061187 out.go:179] * [no-preload-018508] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:57:04.042028 1061187 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:57:04.042079 1061187 notify.go:221] Checking for updates...
	I1119 22:57:04.048980 1061187 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:57:04.051980 1061187 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:04.054966 1061187 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:57:04.057869 1061187 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:57:04.060807 1061187 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:57:04.064300 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:04.065051 1061187 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:57:04.090371 1061187 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:57:04.090488 1061187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:57:04.156245 1061187 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:57:04.145851236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:57:04.156374 1061187 docker.go:319] overlay module found
	I1119 22:57:04.159525 1061187 out.go:179] * Using the docker driver based on existing profile
	I1119 22:57:04.162482 1061187 start.go:309] selected driver: docker
	I1119 22:57:04.162503 1061187 start.go:930] validating driver "docker" against &{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:04.162602 1061187 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:57:04.163449 1061187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:57:04.217966 1061187 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:57:04.20883729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:57:04.218316 1061187 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:04.218350 1061187 cni.go:84] Creating CNI manager for ""
	I1119 22:57:04.218408 1061187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:57:04.218452 1061187 start.go:353] cluster config:
	{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:04.223350 1061187 out.go:179] * Starting "no-preload-018508" primary control-plane node in "no-preload-018508" cluster
	I1119 22:57:04.226177 1061187 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:57:04.229180 1061187 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:57:04.232113 1061187 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:57:04.232214 1061187 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:57:04.232263 1061187 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:57:04.232625 1061187 cache.go:107] acquiring lock: {Name:mk180e474f04af563cbfcf1e6f1ac0d968064e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232721 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:57:04.232736 1061187 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.625µs
	I1119 22:57:04.232751 1061187 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:57:04.232764 1061187 cache.go:107] acquiring lock: {Name:mk2b339ab9bb06155cf46e99d17bcad78cd42ce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232801 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:57:04.232810 1061187 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 48.033µs
	I1119 22:57:04.232817 1061187 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:57:04.232827 1061187 cache.go:107] acquiring lock: {Name:mkeb8164ef4491f0dac349eed28d827e1ab20310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232859 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:57:04.232868 1061187 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.281µs
	I1119 22:57:04.232875 1061187 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:57:04.232884 1061187 cache.go:107] acquiring lock: {Name:mk60a33fac3c62a01332ec72da7be7d237eebaf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232915 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:57:04.232924 1061187 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.321µs
	I1119 22:57:04.232931 1061187 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:57:04.232945 1061187 cache.go:107] acquiring lock: {Name:mkf092cd9edaf9fd2c691350815b05d694be6ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232971 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:57:04.232980 1061187 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.191µs
	I1119 22:57:04.232986 1061187 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:57:04.232995 1061187 cache.go:107] acquiring lock: {Name:mk202931c6624db071b2edd07b2fea5bfea95f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233025 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:57:04.233034 1061187 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.934µs
	I1119 22:57:04.233041 1061187 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:57:04.233049 1061187 cache.go:107] acquiring lock: {Name:mk29595f21458d904bc2d24173d38f20affcf328 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233079 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:57:04.233088 1061187 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.582µs
	I1119 22:57:04.233138 1061187 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:57:04.233162 1061187 cache.go:107] acquiring lock: {Name:mk5ba73f1f86578edab04675b64317e89203f7a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233229 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:57:04.233239 1061187 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 89.74µs
	I1119 22:57:04.233245 1061187 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:57:04.233252 1061187 cache.go:87] Successfully saved all images to host disk.
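
The cache.go lines above repeat the same pattern for every image: take a per-image lock, check whether a tarball already exists under .minikube/cache/images/arm64/, and skip the save when it does, which is why each check completes in microseconds. A minimal sketch of that existence check in Go, with hypothetical helper names (not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor maps an image ref such as "registry.k8s.io/pause:3.10.1" to the
// tarball path seen in the log (the ":" before the tag becomes "_").
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// alreadyCached reports whether the tarball exists; a fuller implementation
// would pull and save the image when it does not.
func alreadyCached(cacheDir, image string) (bool, error) {
	_, err := os.Stat(cachePathFor(cacheDir, image))
	if err == nil {
		return true, nil // the "exists ... skipping" case from the log
	}
	if os.IsNotExist(err) {
		return false, nil
	}
	return false, err
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
	for _, img := range []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.10.1",
	} {
		ok, err := alreadyCached(cacheDir, img)
		fmt.Printf("%s cached=%v err=%v\n", img, ok, err)
	}
}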
	I1119 22:57:04.253662 1061187 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:57:04.253682 1061187 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:57:04.253694 1061187 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:57:04.253716 1061187 start.go:360] acquireMachinesLock for no-preload-018508: {Name:mk5707a3ba7045dab1a444980a59ede7567f2c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.253766 1061187 start.go:364] duration metric: took 34.478µs to acquireMachinesLock for "no-preload-018508"
	I1119 22:57:04.253788 1061187 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:57:04.253794 1061187 fix.go:54] fixHost starting: 
	I1119 22:57:04.254047 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:04.284745 1061187 fix.go:112] recreateIfNeeded on no-preload-018508: state=Stopped err=<nil>
	W1119 22:57:04.284782 1061187 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:57:01.276894 1058620 addons.go:515] duration metric: took 6.669515044s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 22:57:01.280202 1058620 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:57:01.281679 1058620 api_server.go:141] control plane version: v1.28.0
	I1119 22:57:01.281710 1058620 api_server.go:131] duration metric: took 13.850422ms to wait for apiserver health ...
	I1119 22:57:01.281721 1058620 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:57:01.286048 1058620 system_pods.go:59] 8 kube-system pods found
	I1119 22:57:01.286086 1058620 system_pods.go:61] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:01.286098 1058620 system_pods.go:61] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:01.286105 1058620 system_pods.go:61] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:57:01.286114 1058620 system_pods.go:61] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:01.286121 1058620 system_pods.go:61] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:01.286141 1058620 system_pods.go:61] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:57:01.286148 1058620 system_pods.go:61] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:01.286153 1058620 system_pods.go:61] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Running
	I1119 22:57:01.286170 1058620 system_pods.go:74] duration metric: took 4.442116ms to wait for pod list to return data ...
	I1119 22:57:01.286179 1058620 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:57:01.289357 1058620 default_sa.go:45] found service account: "default"
	I1119 22:57:01.289385 1058620 default_sa.go:55] duration metric: took 3.200048ms for default service account to be created ...
	I1119 22:57:01.289396 1058620 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:57:01.293351 1058620 system_pods.go:86] 8 kube-system pods found
	I1119 22:57:01.293386 1058620 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:01.293399 1058620 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:01.293405 1058620 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:57:01.293412 1058620 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:01.293419 1058620 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:01.293429 1058620 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:57:01.293436 1058620 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:01.293448 1058620 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Running
	I1119 22:57:01.293456 1058620 system_pods.go:126] duration metric: took 4.054823ms to wait for k8s-apps to be running ...
	I1119 22:57:01.293465 1058620 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:57:01.293521 1058620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:01.309255 1058620 system_svc.go:56] duration metric: took 15.77983ms WaitForService to wait for kubelet
	I1119 22:57:01.309336 1058620 kubeadm.go:587] duration metric: took 6.7024287s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:01.309371 1058620 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:57:01.313566 1058620 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:57:01.313646 1058620 node_conditions.go:123] node cpu capacity is 2
	I1119 22:57:01.313675 1058620 node_conditions.go:105] duration metric: took 4.264228ms to run NodePressure ...
	I1119 22:57:01.313720 1058620 start.go:242] waiting for startup goroutines ...
	I1119 22:57:01.313747 1058620 start.go:247] waiting for cluster config update ...
	I1119 22:57:01.313774 1058620 start.go:256] writing updated cluster config ...
	I1119 22:57:01.314109 1058620 ssh_runner.go:195] Run: rm -f paused
	I1119 22:57:01.319408 1058620 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:01.324566 1058620 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:57:03.331150 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:04.288150 1061187 out.go:252] * Restarting existing docker container for "no-preload-018508" ...
	I1119 22:57:04.288246 1061187 cli_runner.go:164] Run: docker start no-preload-018508
	I1119 22:57:04.547827 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:04.574469 1061187 kic.go:430] container "no-preload-018508" state is running.
	I1119 22:57:04.574913 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:04.599712 1061187 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:57:04.599937 1061187 machine.go:94] provisionDockerMachine start ...
	I1119 22:57:04.600004 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:04.622473 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:04.622830 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:04.622840 1061187 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:57:04.623475 1061187 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38968->127.0.0.1:33856: read: connection reset by peer
	I1119 22:57:07.770624 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
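
The first dial at 22:57:04.62 is reset while the freshly restarted container's sshd is still coming up, and the same "hostname" command succeeds about three seconds later, so the provisioner evidently retries the connection. A minimal sketch of such a dial-with-retry loop, assuming a fixed delay between attempts and using only the standard library (this is not minikube's actual implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying to open a TCP connection to addr (for example
// "127.0.0.1:33856", the forwarded SSH port in the log) until it succeeds or
// the attempts run out. A real SSH client would then handshake on top of it.
func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(delay) // give sshd inside the restarted container time to start
	}
	return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33856", 10, time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}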
	
	I1119 22:57:07.770650 1061187 ubuntu.go:182] provisioning hostname "no-preload-018508"
	I1119 22:57:07.770712 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:07.787857 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:07.788199 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:07.788217 1061187 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-018508 && echo "no-preload-018508" | sudo tee /etc/hostname
	I1119 22:57:07.941527 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
	
	I1119 22:57:07.941628 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:07.959380 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:07.959687 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:07.959710 1061187 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018508/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:57:08.107538 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:57:08.107564 1061187 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:57:08.107589 1061187 ubuntu.go:190] setting up certificates
	I1119 22:57:08.107600 1061187 provision.go:84] configureAuth start
	I1119 22:57:08.107662 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:08.126156 1061187 provision.go:143] copyHostCerts
	I1119 22:57:08.126267 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:57:08.126290 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:57:08.126396 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:57:08.126557 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:57:08.126570 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:57:08.126607 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:57:08.126718 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:57:08.126729 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:57:08.126763 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:57:08.126845 1061187 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.no-preload-018508 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-018508]
	I1119 22:57:08.805102 1061187 provision.go:177] copyRemoteCerts
	I1119 22:57:08.805175 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:57:08.805216 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:08.826566 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:08.931363 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:57:08.951029 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:57:08.971638 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:57:08.990851 1061187 provision.go:87] duration metric: took 883.228354ms to configureAuth
	I1119 22:57:08.990901 1061187 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:57:08.991116 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:08.991234 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.011138 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:09.011518 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:09.011558 1061187 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1119 22:57:05.830911 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:07.831041 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:09.386190 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:57:09.386256 1061187 machine.go:97] duration metric: took 4.786306358s to provisionDockerMachine
	I1119 22:57:09.386285 1061187 start.go:293] postStartSetup for "no-preload-018508" (driver="docker")
	I1119 22:57:09.386330 1061187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:57:09.386440 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:57:09.386503 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.410059 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.515009 1061187 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:57:09.518358 1061187 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:57:09.518386 1061187 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:57:09.518398 1061187 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:57:09.518449 1061187 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:57:09.518537 1061187 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:57:09.518651 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:57:09.526220 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:57:09.544684 1061187 start.go:296] duration metric: took 158.365953ms for postStartSetup
	I1119 22:57:09.544763 1061187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:57:09.544822 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.562260 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.660029 1061187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:57:09.664958 1061187 fix.go:56] duration metric: took 5.41115647s for fixHost
	I1119 22:57:09.664982 1061187 start.go:83] releasing machines lock for "no-preload-018508", held for 5.411207859s
	I1119 22:57:09.665058 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:09.682907 1061187 ssh_runner.go:195] Run: cat /version.json
	I1119 22:57:09.682963 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.683055 1061187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:57:09.683124 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.708164 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.715225 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.806406 1061187 ssh_runner.go:195] Run: systemctl --version
	I1119 22:57:09.904624 1061187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:57:09.941999 1061187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:57:09.946308 1061187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:57:09.946389 1061187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:57:09.954397 1061187 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:57:09.954419 1061187 start.go:496] detecting cgroup driver to use...
	I1119 22:57:09.954451 1061187 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:57:09.954498 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:57:09.971537 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:57:09.984558 1061187 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:57:09.984670 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:57:10.000866 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:57:10.017965 1061187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:57:10.148189 1061187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:57:10.273950 1061187 docker.go:234] disabling docker service ...
	I1119 22:57:10.274072 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:57:10.288999 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:57:10.302390 1061187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:57:10.418686 1061187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:57:10.544989 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:57:10.559056 1061187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:57:10.573714 1061187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:57:10.573804 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.583803 1061187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:57:10.583871 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.593246 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.602563 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.611649 1061187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:57:10.620223 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.629541 1061187 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.638320 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.647257 1061187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:57:10.655056 1061187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:57:10.662503 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:10.788625 1061187 ssh_runner.go:195] Run: sudo systemctl restart crio
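
The run of sed commands between 22:57:10.57 and 22:57:10.66 rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then reloads systemd and restarts crio. The sketch below replays the two central edits to make the sed expressions explicit; it assumes passwordless sudo and a local shell rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// run executes a shell snippet the same way the log does ("sh -c ...").
func run(script string) error {
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", script, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		// pin the pause image used for pod sandboxes
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + crioConf,
		// match the cgroup driver detected on the host ("cgroupfs" in this run)
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + crioConf,
		// pick up the changes
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}

Editing the drop-in under crio.conf.d rather than the main crio.conf keeps these overrides separate from the packaged defaults and easy to revert.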
	I1119 22:57:10.969669 1061187 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:57:10.969743 1061187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:57:10.974557 1061187 start.go:564] Will wait 60s for crictl version
	I1119 22:57:10.974631 1061187 ssh_runner.go:195] Run: which crictl
	I1119 22:57:10.978427 1061187 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:57:11.007254 1061187 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:57:11.007371 1061187 ssh_runner.go:195] Run: crio --version
	I1119 22:57:11.051672 1061187 ssh_runner.go:195] Run: crio --version
	I1119 22:57:11.123753 1061187 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:57:11.126625 1061187 cli_runner.go:164] Run: docker network inspect no-preload-018508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:57:11.144816 1061187 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:57:11.151118 1061187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
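
The two commands above are the "ensure hosts entry" step: grep for the host.minikube.internal line first, then rebuild /etc/hosts with any stale line filtered out and the fresh "192.168.85.1<TAB>host.minikube.internal" mapping appended; the same pattern runs again for control-plane.minikube.internal at 22:57:11.41. A minimal local sketch of that step in Go (a hypothetical helper operating on an ordinary file, not over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>name" and appends
// "ip<TAB>name", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log; point this at a scratch copy of /etc/hosts.
	if err := ensureHostsEntry("hosts.test", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}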
	I1119 22:57:11.166649 1061187 kubeadm.go:884] updating cluster {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:57:11.166784 1061187 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:57:11.166834 1061187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:57:11.231528 1061187 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:57:11.231553 1061187 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:57:11.231561 1061187 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:57:11.231657 1061187 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-018508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
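
The drop-in above only pins the per-node values (the kubelet binary under /var/lib/minikube/binaries/v1.34.1, --hostname-override and --node-ip); everything else is read from /var/lib/kubelet/config.yaml. A minimal sketch of rendering that ExecStart line from the values shown in the log (a hypothetical template, not minikube's generator):

package main

import "fmt"

// kubeletExecStart renders the ExecStart line from the per-node values that
// vary between profiles; the remaining flags are fixed, as in the log above.
func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
	return fmt.Sprintf(
		"ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml "+
			"--enforce-node-allocatable= --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		k8sVersion, nodeName, nodeIP)
}

func main() {
	// Values for this run, taken from the log.
	fmt.Println(kubeletExecStart("v1.34.1", "no-preload-018508", "192.168.85.2"))
}

The rendered unit ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as the scp line at 22:57:11.349 below shows.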
	I1119 22:57:11.231740 1061187 ssh_runner.go:195] Run: crio config
	I1119 22:57:11.328927 1061187 cni.go:84] Creating CNI manager for ""
	I1119 22:57:11.328954 1061187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:57:11.328971 1061187 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:57:11.328994 1061187 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018508 NodeName:no-preload-018508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:57:11.329129 1061187 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018508"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
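	The YAML above is the kubeadm config minikube renders from the options logged just before it; the lines that follow show it being copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal, hedged sketch of inspecting that rendered file directly, assuming the no-preload-018508 profile from this run is still up (paths taken from the scp and diff steps in this log, not a documented workflow):

	  minikube -p no-preload-018508 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  # compare against the config currently in place, as minikube itself does later in this log
	  minikube -p no-preload-018508 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new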
	
	I1119 22:57:11.329214 1061187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:57:11.339279 1061187 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:57:11.339357 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:57:11.349159 1061187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 22:57:11.365849 1061187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:57:11.385466 1061187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:57:11.409315 1061187 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:57:11.412934 1061187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:57:11.427670 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:11.581123 1061187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:57:11.598425 1061187 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508 for IP: 192.168.85.2
	I1119 22:57:11.598447 1061187 certs.go:195] generating shared ca certs ...
	I1119 22:57:11.598463 1061187 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:11.598601 1061187 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:57:11.598653 1061187 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:57:11.598666 1061187 certs.go:257] generating profile certs ...
	I1119 22:57:11.598749 1061187 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.key
	I1119 22:57:11.598819 1061187 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e
	I1119 22:57:11.598860 1061187 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key
	I1119 22:57:11.599002 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:57:11.599041 1061187 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:57:11.599054 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:57:11.599083 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:57:11.599110 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:57:11.599141 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:57:11.599185 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:57:11.599781 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:57:11.625510 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:57:11.660077 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:57:11.709464 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:57:11.751942 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:57:11.770186 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:57:11.795339 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:57:11.829029 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:57:11.853542 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:57:11.889106 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:57:11.910851 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:57:11.933584 1061187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:57:11.947766 1061187 ssh_runner.go:195] Run: openssl version
	I1119 22:57:11.956358 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:57:11.965742 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:57:11.971749 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:57:11.971832 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:57:12.020082 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:57:12.030140 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:57:12.041442 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.046324 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.046404 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.100852 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:57:12.109930 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:57:12.119574 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.123922 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.124046 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.169044 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
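	The openssl/ln pairs above implement the standard subject-hash symlink layout under /etc/ssl/certs: "openssl x509 -hash -noout" prints the hash (3ec20f2e, b5213941, 51391683 in this run) that becomes the <hash>.0 link name. A minimal sketch of one such pair, run on the node, using the minikubeCA.pem path from this log:

	  # compute the subject hash and (re)create the <hash>.0 symlink;
	  # the link target is exactly what the log shows for minikubeCA.pem
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"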
	I1119 22:57:12.178144 1061187 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:57:12.197626 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:57:12.248888 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:57:12.342860 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:57:12.441384 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:57:12.553705 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:57:12.639403 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
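	Each -checkend 86400 run above asks openssl whether the given certificate expires within the next 24 hours (86400 seconds); the result is carried in the exit status, not in any output. A hedged one-liner equivalent for one of the certs checked here, run inside the node (for example via minikube -p no-preload-018508 ssh):

	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo valid-for-24h || echo expires-within-24h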
	I1119 22:57:12.864893 1061187 kubeadm.go:401] StartCluster: {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:12.865018 1061187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:57:12.865102 1061187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:57:13.030532 1061187 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:57:13.030690 1061187 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:57:13.030713 1061187 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:57:13.030762 1061187 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:57:13.030783 1061187 cri.go:89] found id: ""
	I1119 22:57:13.030906 1061187 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:57:13.079279 1061187 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:13Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:57:13.079445 1061187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:57:13.106678 1061187 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:57:13.106736 1061187 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:57:13.106841 1061187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:57:13.131899 1061187 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:57:13.132590 1061187 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-018508" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:13.132921 1061187 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-018508" cluster setting kubeconfig missing "no-preload-018508" context setting]
	I1119 22:57:13.133483 1061187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.135298 1061187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:57:13.149458 1061187 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:57:13.149538 1061187 kubeadm.go:602] duration metric: took 42.771543ms to restartPrimaryControlPlane
	I1119 22:57:13.149561 1061187 kubeadm.go:403] duration metric: took 284.678413ms to StartCluster
	I1119 22:57:13.149606 1061187 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.149703 1061187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:13.150804 1061187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.151100 1061187 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:57:13.151662 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:13.151608 1061187 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:57:13.151840 1061187 addons.go:70] Setting storage-provisioner=true in profile "no-preload-018508"
	I1119 22:57:13.151882 1061187 addons.go:239] Setting addon storage-provisioner=true in "no-preload-018508"
	W1119 22:57:13.151915 1061187 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:57:13.152013 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.151845 1061187 addons.go:70] Setting dashboard=true in profile "no-preload-018508"
	I1119 22:57:13.152135 1061187 addons.go:239] Setting addon dashboard=true in "no-preload-018508"
	W1119 22:57:13.152143 1061187 addons.go:248] addon dashboard should already be in state true
	I1119 22:57:13.152168 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.152597 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.152755 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.151855 1061187 addons.go:70] Setting default-storageclass=true in profile "no-preload-018508"
	I1119 22:57:13.153393 1061187 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018508"
	I1119 22:57:13.153657 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.157252 1061187 out.go:179] * Verifying Kubernetes components...
	I1119 22:57:13.163391 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:13.195731 1061187 addons.go:239] Setting addon default-storageclass=true in "no-preload-018508"
	W1119 22:57:13.195755 1061187 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:57:13.195779 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.196184 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.214071 1061187 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:57:13.217021 1061187 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:57:13.217043 1061187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:57:13.217111 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.221819 1061187 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:57:13.229130 1061187 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:57:13.232866 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:57:13.232893 1061187 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:57:13.232976 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.250247 1061187 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:57:13.250272 1061187 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:57:13.250335 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.255385 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.292560 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.298979 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.629899 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:57:13.714280 1061187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:57:13.718667 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:57:13.738121 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:57:13.738184 1061187 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:57:13.835108 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:57:13.835173 1061187 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:57:13.939435 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:57:13.939502 1061187 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1119 22:57:10.331375 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:12.837000 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:14.062539 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:57:14.062573 1061187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:57:14.201324 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:57:14.201352 1061187 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:57:14.241353 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:57:14.241425 1061187 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:57:14.287522 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:57:14.287597 1061187 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:57:14.330201 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:57:14.330272 1061187 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:57:14.355070 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:57:14.355151 1061187 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:57:14.392506 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1119 22:57:15.333834 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:17.334585 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:19.337505 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:23.851447 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.221512783s)
	I1119 22:57:23.851506 1061187 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.137163535s)
	I1119 22:57:23.851542 1061187 node_ready.go:35] waiting up to 6m0s for node "no-preload-018508" to be "Ready" ...
	I1119 22:57:23.851844 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.133101385s)
	I1119 22:57:23.882583 1061187 node_ready.go:49] node "no-preload-018508" is "Ready"
	I1119 22:57:23.882825 1061187 node_ready.go:38] duration metric: took 31.248524ms for node "no-preload-018508" to be "Ready" ...
	I1119 22:57:23.882959 1061187 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:57:23.883069 1061187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:57:23.898117 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.505521144s)
	I1119 22:57:23.901659 1061187 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-018508 addons enable metrics-server
	
	I1119 22:57:23.904886 1061187 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1119 22:57:23.907967 1061187 addons.go:515] duration metric: took 10.756350881s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 22:57:23.928641 1061187 api_server.go:72] duration metric: took 10.777483975s to wait for apiserver process to appear ...
	I1119 22:57:23.928710 1061187 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:57:23.928746 1061187 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:57:23.943766 1061187 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:57:23.945283 1061187 api_server.go:141] control plane version: v1.34.1
	I1119 22:57:23.945358 1061187 api_server.go:131] duration metric: took 16.624401ms to wait for apiserver health ...
	I1119 22:57:23.945383 1061187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:57:23.955062 1061187 system_pods.go:59] 8 kube-system pods found
	I1119 22:57:23.955147 1061187 system_pods.go:61] "coredns-66bc5c9577-rxhmf" [71cc8e54-484a-403e-bda7-a4e70390d4c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:23.955183 1061187 system_pods.go:61] "etcd-no-preload-018508" [8f8923b4-736d-447c-8130-1615741a5ca8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:23.955221 1061187 system_pods.go:61] "kindnet-2n4sq" [0c7558d8-6110-4c54-851c-23315a8a713c] Running
	I1119 22:57:23.955260 1061187 system_pods.go:61] "kube-apiserver-no-preload-018508" [3c2ba7c2-c96b-4683-9e19-d6dc92c78be7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:23.955286 1061187 system_pods.go:61] "kube-controller-manager-no-preload-018508" [72bd7185-6053-4ea8-ad99-8f6bebd83526] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:23.955315 1061187 system_pods.go:61] "kube-proxy-pn4pw" [2162b2bd-a5f2-4538-8a48-79a0246a58eb] Running
	I1119 22:57:23.955346 1061187 system_pods.go:61] "kube-scheduler-no-preload-018508" [a65aee10-0318-4615-aaba-4e411683dd55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:23.955372 1061187 system_pods.go:61] "storage-provisioner" [6c83bdb2-ec25-44e1-ae35-77e4dea28165] Running
	I1119 22:57:23.955398 1061187 system_pods.go:74] duration metric: took 9.992335ms to wait for pod list to return data ...
	I1119 22:57:23.955421 1061187 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:57:23.963716 1061187 default_sa.go:45] found service account: "default"
	I1119 22:57:23.963783 1061187 default_sa.go:55] duration metric: took 8.330006ms for default service account to be created ...
	I1119 22:57:23.963807 1061187 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:57:23.969465 1061187 system_pods.go:86] 8 kube-system pods found
	I1119 22:57:23.969547 1061187 system_pods.go:89] "coredns-66bc5c9577-rxhmf" [71cc8e54-484a-403e-bda7-a4e70390d4c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:23.969572 1061187 system_pods.go:89] "etcd-no-preload-018508" [8f8923b4-736d-447c-8130-1615741a5ca8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:23.969612 1061187 system_pods.go:89] "kindnet-2n4sq" [0c7558d8-6110-4c54-851c-23315a8a713c] Running
	I1119 22:57:23.969642 1061187 system_pods.go:89] "kube-apiserver-no-preload-018508" [3c2ba7c2-c96b-4683-9e19-d6dc92c78be7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:23.969669 1061187 system_pods.go:89] "kube-controller-manager-no-preload-018508" [72bd7185-6053-4ea8-ad99-8f6bebd83526] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:23.969695 1061187 system_pods.go:89] "kube-proxy-pn4pw" [2162b2bd-a5f2-4538-8a48-79a0246a58eb] Running
	I1119 22:57:23.969728 1061187 system_pods.go:89] "kube-scheduler-no-preload-018508" [a65aee10-0318-4615-aaba-4e411683dd55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:23.969754 1061187 system_pods.go:89] "storage-provisioner" [6c83bdb2-ec25-44e1-ae35-77e4dea28165] Running
	I1119 22:57:23.969781 1061187 system_pods.go:126] duration metric: took 5.950615ms to wait for k8s-apps to be running ...
	I1119 22:57:23.969808 1061187 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:57:23.969889 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:23.992462 1061187 system_svc.go:56] duration metric: took 22.646262ms WaitForService to wait for kubelet
	I1119 22:57:23.992494 1061187 kubeadm.go:587] duration metric: took 10.841342411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:23.992522 1061187 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:57:23.997356 1061187 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:57:23.997390 1061187 node_conditions.go:123] node cpu capacity is 2
	I1119 22:57:23.997403 1061187 node_conditions.go:105] duration metric: took 4.874727ms to run NodePressure ...
	I1119 22:57:23.997430 1061187 start.go:242] waiting for startup goroutines ...
	I1119 22:57:23.997441 1061187 start.go:247] waiting for cluster config update ...
	I1119 22:57:23.997459 1061187 start.go:256] writing updated cluster config ...
	I1119 22:57:23.997795 1061187 ssh_runner.go:195] Run: rm -f paused
	I1119 22:57:24.002699 1061187 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:24.009232 1061187 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rxhmf" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:57:21.350797 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:23.832992 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:26.016733 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:28.515866 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:26.332016 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:28.336766 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:31.027488 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:33.515870 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:30.830716 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:32.831069 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:34.831548 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:35.518012 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:38.016484 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:37.331586 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:37.830571 1058620 pod_ready.go:94] pod "coredns-5dd5756b68-sf6gl" is "Ready"
	I1119 22:57:37.830594 1058620 pod_ready.go:86] duration metric: took 36.505956168s for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.833866 1058620 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.838819 1058620 pod_ready.go:94] pod "etcd-old-k8s-version-191961" is "Ready"
	I1119 22:57:37.838849 1058620 pod_ready.go:86] duration metric: took 4.956147ms for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.841873 1058620 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.848192 1058620 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-191961" is "Ready"
	I1119 22:57:37.848220 1058620 pod_ready.go:86] duration metric: took 6.324912ms for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.851336 1058620 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.029464 1058620 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-191961" is "Ready"
	I1119 22:57:38.029545 1058620 pod_ready.go:86] duration metric: took 178.185891ms for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.229396 1058620 pod_ready.go:83] waiting for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.628053 1058620 pod_ready.go:94] pod "kube-proxy-rkdfn" is "Ready"
	I1119 22:57:38.628080 1058620 pod_ready.go:86] duration metric: took 398.658899ms for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.829389 1058620 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:39.228679 1058620 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-191961" is "Ready"
	I1119 22:57:39.228710 1058620 pod_ready.go:86] duration metric: took 399.295902ms for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:39.228723 1058620 pod_ready.go:40] duration metric: took 37.909221709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:39.285450 1058620 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 22:57:39.288759 1058620 out.go:203] 
	W1119 22:57:39.291794 1058620 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:57:39.294693 1058620 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:57:39.297633 1058620 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-191961" cluster and "default" namespace by default
	W1119 22:57:40.026290 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:42.515278 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:45.029634 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:47.515581 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.933014196Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.937069118Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.937104835Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.93712717Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941313433Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941356141Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941383193Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945713933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945750938Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945802598Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.950325799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.950364404Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.859432713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b74aa6b-3718-4a9c-b7d1-8571e1968f4d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.860623647Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d7f24f87-2562-475a-84b2-2378c36c8bd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.861959657Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=df1b22ef-8001-47cb-9dc0-8ff6dfde7cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.86208408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.870405003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.871167471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.891954341Z" level=info msg="Created container 1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=df1b22ef-8001-47cb-9dc0-8ff6dfde7cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.89448487Z" level=info msg="Starting container: 1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8" id=09db5cb4-af55-4da1-85b2-f37d496fb249 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.896852773Z" level=info msg="Started container" PID=1753 containerID=1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper id=09db5cb4-af55-4da1-85b2-f37d496fb249 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168
	Nov 19 22:57:42 old-k8s-version-191961 conmon[1751]: conmon 1e677e29cd3752fa7ed7 <ninfo>: container 1753 exited with status 1
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.138804558Z" level=info msg="Removing container: f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.147460376Z" level=info msg="Error loading conmon cgroup of container f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4: cgroup deleted" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.152684448Z" level=info msg="Removed container f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1e677e29cd375       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   ad7030e403338       dashboard-metrics-scraper-5f989dc9cf-4f2bw       kubernetes-dashboard
	7c81dcbf758f3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   0a11f185687e1       storage-provisioner                              kube-system
	15f172e1e9015       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   58a7803e26d91       kubernetes-dashboard-8694d4445c-mxfnk            kubernetes-dashboard
	66d481d552c0c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   26b92c6d51781       coredns-5dd5756b68-sf6gl                         kube-system
	32c78aa0c20f9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   31aeaf1cc12fd       busybox                                          default
	b56f2b24a58be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   0a11f185687e1       storage-provisioner                              kube-system
	48938cdfbab01       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   7a9f55e4d3b94       kindnet-dtpd4                                    kube-system
	991962c053d8c       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   aa85be79d32fe       kube-proxy-rkdfn                                 kube-system
	40ffa36b8db7d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   473147fd51446       kube-controller-manager-old-k8s-version-191961   kube-system
	dc473b93b033b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   4b3bd4fb25b14       kube-scheduler-old-k8s-version-191961            kube-system
	d5661c576bd48       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   736208b20b905       etcd-old-k8s-version-191961                      kube-system
	a0adb72d131f7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   74d992276ebe5       kube-apiserver-old-k8s-version-191961            kube-system
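	The container-status table above is the CRI runtime's view of the old-k8s-version-191961 node, including the Exited dashboard-metrics-scraper attempt. A hedged way to regenerate it directly:

	  # crictl ps -a lists all containers, including exited ones, mirroring the table embedded here
	  minikube -p old-k8s-version-191961 ssh -- sudo crictl ps -a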
	
	
	==> coredns [66d481d552c0c92d846bfcde284f4bbca87eb63d685d51037b2d199683b3b8b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51358 - 13150 "HINFO IN 3754183133172108303.7751547629821104851. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032745549s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
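	These are the logs of the coredns container whose ID heads this section (66d481d552c0c, running as pod coredns-5dd5756b68-sf6gl per the table above). Two hedged ways to pull the same output, using names taken from this log:

	  # Kubernetes-level:
	  kubectl -n kube-system logs coredns-5dd5756b68-sf6gl
	  # CRI-level, on the node, using the truncated container ID:
	  minikube -p old-k8s-version-191961 ssh -- sudo crictl logs 66d481d552c0c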
	
	
	==> describe nodes <==
	Name:               old-k8s-version-191961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-191961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-191961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:55:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-191961
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-191961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                a586ad19-3112-4f7e-a794-67583869230e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-sf6gl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-191961                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-dtpd4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-191961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-191961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-rkdfn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-191961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4f2bw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mxfnk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-191961 event: Registered Node old-k8s-version-191961 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-191961 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-191961 event: Registered Node old-k8s-version-191961 in Controller
	
	
	==> dmesg <==
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d5661c576bd48805df3310a42259b25f3d6358219721fabef991e12173d9f4d0] <==
	{"level":"info","ts":"2025-11-19T22:56:54.559204Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:56:54.559212Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:56:54.559424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:56:54.55948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:56:54.559558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:56:54.559584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:56:54.567271Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:56:54.571416Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:56:54.571451Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:56:54.571185Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:56:54.571478Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:56:56.134896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.137645Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-191961 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:56:56.137827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:56:56.138911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T22:56:56.146894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:56:56.147555Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:56:56.14759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:56:56.148076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:57:54 up  4:40,  0 user,  load average: 4.33, 3.21, 2.51
	Linux old-k8s-version-191961 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [48938cdfbab01455db270b6a2524cd6415ce23a3701805784912fac3f64e75b3] <==
	I1119 22:57:00.697707       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:57:00.698031       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:57:00.698196       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:57:00.698209       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:57:00.698223       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:57:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:57:00.925691       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:57:00.925710       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:57:00.925719       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:57:00.926024       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:57:30.927538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:57:30.927650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 22:57:30.927807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:57:30.930250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1119 22:57:32.426393       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:57:32.426434       1 metrics.go:72] Registering metrics
	I1119 22:57:32.426487       1 controller.go:711] "Syncing nftables rules"
	I1119 22:57:40.926053       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:57:40.926769       1 main.go:301] handling current node
	I1119 22:57:50.931778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:57:50.931809       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0adb72d131f7a3e37e9659ed410147e226ae0a9c56505fa2588b71340e71ecd] <==
	I1119 22:56:58.601317       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 22:56:58.941640       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:56:58.991823       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:56:59.001068       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:56:59.001163       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:56:59.008144       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:56:59.010559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:56:59.018815       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:56:59.027088       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 22:56:59.027864       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 22:56:59.028669       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:56:59.028725       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:56:59.028755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:56:59.028791       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:56:59.594751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:57:01.065968       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:57:01.117356       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:57:01.145161       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:57:01.155912       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:57:01.169727       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:57:01.233903       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.236.52"}
	I1119 22:57:01.259580       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.23.152"}
	I1119 22:57:11.194481       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:57:11.260267       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:57:11.337748       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [40ffa36b8db7d4bddf7eec2be93374f85c68b4f8475b1ca8f95a6e259bc4b4ec] <==
	I1119 22:57:11.222481       1 shared_informer.go:318] Caches are synced for endpoint
	I1119 22:57:11.231845       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1119 22:57:11.255357       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:57:11.265127       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 22:57:11.276784       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mxfnk"
	I1119 22:57:11.276948       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	I1119 22:57:11.306330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.31081ms"
	I1119 22:57:11.307043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.868964ms"
	I1119 22:57:11.414271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.141732ms"
	I1119 22:57:11.414402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.39µs"
	I1119 22:57:11.418372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.864µs"
	I1119 22:57:11.418607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.154035ms"
	I1119 22:57:11.448881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.244814ms"
	I1119 22:57:11.448973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.856µs"
	I1119 22:57:11.702494       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:57:11.702521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:57:11.722211       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:57:20.081788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.496398ms"
	I1119 22:57:20.082029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.942µs"
	I1119 22:57:28.119132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.413µs"
	I1119 22:57:29.122211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.684µs"
	I1119 22:57:30.126853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.108µs"
	I1119 22:57:37.586428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.328929ms"
	I1119 22:57:37.586810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.924µs"
	I1119 22:57:43.157002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.809µs"
	
	
	==> kube-proxy [991962c053d8c37f8fb7d52404d9d0e4a26ce40375bc807670a69e5e309d0e10] <==
	I1119 22:57:00.799989       1 server_others.go:69] "Using iptables proxy"
	I1119 22:57:00.823013       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:57:00.869307       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:57:00.904200       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:57:00.904248       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:57:00.904257       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:57:00.904285       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:57:00.904527       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:57:00.904543       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:00.915899       1 config.go:188] "Starting service config controller"
	I1119 22:57:00.917004       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:57:00.917111       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:57:00.917143       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:57:00.918041       1 config.go:315] "Starting node config controller"
	I1119 22:57:00.920244       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:57:01.019829       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:57:01.019929       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:57:01.020363       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [dc473b93b033b07c30f493568e843909bf72c0923b3edcfa7b790acdcd5d2734] <==
	I1119 22:56:58.151820       1 serving.go:348] Generated self-signed cert in-memory
	I1119 22:56:59.411338       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 22:56:59.411447       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:56:59.416304       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 22:56:59.416511       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 22:56:59.416558       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 22:56:59.416606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 22:56:59.418331       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:56:59.423693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 22:56:59.422983       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:56:59.424072       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:56:59.517161       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 22:56:59.524680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:56:59.524742       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.300363     794 topology_manager.go:215] "Topology Admit Handler" podUID="4bdd98f3-4299-4444-93ee-dbf5f3d503ed" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.318668     794 topology_manager.go:215] "Topology Admit Handler" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428279     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/320ae2e5-a523-4c14-9c8d-277f0d7218a2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4f2bw\" (UID: \"320ae2e5-a523-4c14-9c8d-277f0d7218a2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428502     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sm7\" (UniqueName: \"kubernetes.io/projected/320ae2e5-a523-4c14-9c8d-277f0d7218a2-kube-api-access-b2sm7\") pod \"dashboard-metrics-scraper-5f989dc9cf-4f2bw\" (UID: \"320ae2e5-a523-4c14-9c8d-277f0d7218a2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428697     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bdd98f3-4299-4444-93ee-dbf5f3d503ed-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mxfnk\" (UID: \"4bdd98f3-4299-4444-93ee-dbf5f3d503ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428802     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvxl\" (UniqueName: \"kubernetes.io/projected/4bdd98f3-4299-4444-93ee-dbf5f3d503ed-kube-api-access-jnvxl\") pod \"kubernetes-dashboard-8694d4445c-mxfnk\" (UID: \"4bdd98f3-4299-4444-93ee-dbf5f3d503ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: W1119 22:57:11.658981     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b WatchSource:0}: Error finding container 58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b: Status 404 returned error can't find the container with id 58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: W1119 22:57:11.706804     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168 WatchSource:0}: Error finding container ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168: Status 404 returned error can't find the container with id ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168
	Nov 19 22:57:28 old-k8s-version-191961 kubelet[794]: I1119 22:57:28.085399     794 scope.go:117] "RemoveContainer" containerID="98b505204789766526dcab1f128451e826363cf3c3a53eb40bd1516bcf2a26c3"
	Nov 19 22:57:28 old-k8s-version-191961 kubelet[794]: I1119 22:57:28.123565     794 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk" podStartSLOduration=8.930762704 podCreationTimestamp="2025-11-19 22:57:11 +0000 UTC" firstStartedPulling="2025-11-19 22:57:11.66592631 +0000 UTC m=+17.993214362" lastFinishedPulling="2025-11-19 22:57:19.858665922 +0000 UTC m=+26.185953974" observedRunningTime="2025-11-19 22:57:20.046437276 +0000 UTC m=+26.373725328" watchObservedRunningTime="2025-11-19 22:57:28.123502316 +0000 UTC m=+34.450790368"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: I1119 22:57:29.089126     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: I1119 22:57:29.089516     794 scope.go:117] "RemoveContainer" containerID="98b505204789766526dcab1f128451e826363cf3c3a53eb40bd1516bcf2a26c3"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: E1119 22:57:29.094900     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:30 old-k8s-version-191961 kubelet[794]: I1119 22:57:30.093122     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:30 old-k8s-version-191961 kubelet[794]: E1119 22:57:30.094047     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: I1119 22:57:31.097648     794 scope.go:117] "RemoveContainer" containerID="b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: I1119 22:57:31.622775     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: E1119 22:57:31.623605     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:42 old-k8s-version-191961 kubelet[794]: I1119 22:57:42.858683     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: I1119 22:57:43.133836     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: I1119 22:57:43.134785     794 scope.go:117] "RemoveContainer" containerID="1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: E1119 22:57:43.135212     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [15f172e1e9015f511d2f9658c286a7d2e124c2f8d7ce49a0480611b9338af010] <==
	2025/11/19 22:57:19 Starting overwatch
	2025/11/19 22:57:19 Using namespace: kubernetes-dashboard
	2025/11/19 22:57:19 Using in-cluster config to connect to apiserver
	2025/11/19 22:57:19 Using secret token for csrf signing
	2025/11/19 22:57:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:57:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:57:20 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 22:57:20 Generating JWE encryption key
	2025/11/19 22:57:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:57:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:57:21 Initializing JWE encryption key from synchronized object
	2025/11/19 22:57:21 Creating in-cluster Sidecar client
	2025/11/19 22:57:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:21 Serving insecurely on HTTP port: 9090
	2025/11/19 22:57:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7c81dcbf758f34246c6eb872955ba1903e0b6b3d9cf8cdf578aa6ba6198b72ac] <==
	I1119 22:57:31.198769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:57:31.227300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:57:31.227429       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:57:48.656854       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:57:48.657005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b686f98b-4f98-4c94-964a-7a5f07bf7388", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9 became leader
	I1119 22:57:48.657027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9!
	I1119 22:57:48.757949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9!
	
	
	==> storage-provisioner [b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241] <==
	I1119 22:57:00.566461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:57:30.568921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191961 -n old-k8s-version-191961: exit status 2 (391.501884ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-191961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-191961
helpers_test.go:243: (dbg) docker inspect old-k8s-version-191961:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	        "Created": "2025-11-19T22:55:13.430692279Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1058752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:56:45.530168221Z",
	            "FinishedAt": "2025-11-19T22:56:44.534949289Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/hosts",
	        "LogPath": "/var/lib/docker/containers/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee-json.log",
	        "Name": "/old-k8s-version-191961",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-191961:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-191961",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee",
	                "LowerDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3feece3499bff28f92fb929400a5a5af8fcb9237d8613e9f4c1347ea9717edfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-191961",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-191961/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-191961",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-191961",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e2580d9d8bcb4dc74e6d7e4adaec49f42dfbf838f387c168b1846820ea4053a",
	            "SandboxKey": "/var/run/docker/netns/5e2580d9d8bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-191961": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:5b:58:65:22:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "47f03f83c3fe719b80c42f4da32b57adc2e9e8ee352f6eea7c164878ce0bc301",
	                    "EndpointID": "678b0e0a7035e7c3e8a6945ad3b99792c0c1943f1d76cda27bd634edf3fed170",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-191961",
	                        "e6ae989c9f99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961: exit status 2 (368.145386ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-191961 logs -n 25: (1.476719717s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cilium-334366                                                                                                                                                                                                                              │ cilium-334366             │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:50 UTC │
	│ start   │ -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:50 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:54 UTC │
	│ delete  │ -p force-systemd-env-860026                                                                                                                                                                                                                   │ force-systemd-env-860026  │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:51 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:57:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:57:04.031783 1061187 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:57:04.031928 1061187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:04.031940 1061187 out.go:374] Setting ErrFile to fd 2...
	I1119 22:57:04.031947 1061187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:57:04.032256 1061187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:57:04.032677 1061187 out.go:368] Setting JSON to false
	I1119 22:57:04.033710 1061187 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16753,"bootTime":1763576271,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:57:04.033785 1061187 start.go:143] virtualization:  
	I1119 22:57:04.038853 1061187 out.go:179] * [no-preload-018508] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:57:04.042028 1061187 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:57:04.042079 1061187 notify.go:221] Checking for updates...
	I1119 22:57:04.048980 1061187 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:57:04.051980 1061187 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:04.054966 1061187 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:57:04.057869 1061187 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:57:04.060807 1061187 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:57:04.064300 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:04.065051 1061187 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:57:04.090371 1061187 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:57:04.090488 1061187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:57:04.156245 1061187 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:57:04.145851236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:57:04.156374 1061187 docker.go:319] overlay module found
	I1119 22:57:04.159525 1061187 out.go:179] * Using the docker driver based on existing profile
	I1119 22:57:04.162482 1061187 start.go:309] selected driver: docker
	I1119 22:57:04.162503 1061187 start.go:930] validating driver "docker" against &{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:04.162602 1061187 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:57:04.163449 1061187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:57:04.217966 1061187 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:57:04.20883729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:57:04.218316 1061187 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:04.218350 1061187 cni.go:84] Creating CNI manager for ""
	I1119 22:57:04.218408 1061187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:57:04.218452 1061187 start.go:353] cluster config:
	{Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:04.223350 1061187 out.go:179] * Starting "no-preload-018508" primary control-plane node in "no-preload-018508" cluster
	I1119 22:57:04.226177 1061187 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:57:04.229180 1061187 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:57:04.232113 1061187 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:57:04.232214 1061187 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:57:04.232263 1061187 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:57:04.232625 1061187 cache.go:107] acquiring lock: {Name:mk180e474f04af563cbfcf1e6f1ac0d968064e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232721 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:57:04.232736 1061187 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.625µs
	I1119 22:57:04.232751 1061187 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:57:04.232764 1061187 cache.go:107] acquiring lock: {Name:mk2b339ab9bb06155cf46e99d17bcad78cd42ce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232801 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:57:04.232810 1061187 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 48.033µs
	I1119 22:57:04.232817 1061187 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:57:04.232827 1061187 cache.go:107] acquiring lock: {Name:mkeb8164ef4491f0dac349eed28d827e1ab20310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232859 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:57:04.232868 1061187 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.281µs
	I1119 22:57:04.232875 1061187 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:57:04.232884 1061187 cache.go:107] acquiring lock: {Name:mk60a33fac3c62a01332ec72da7be7d237eebaf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232915 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:57:04.232924 1061187 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 41.321µs
	I1119 22:57:04.232931 1061187 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:57:04.232945 1061187 cache.go:107] acquiring lock: {Name:mkf092cd9edaf9fd2c691350815b05d694be6ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.232971 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:57:04.232980 1061187 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.191µs
	I1119 22:57:04.232986 1061187 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:57:04.232995 1061187 cache.go:107] acquiring lock: {Name:mk202931c6624db071b2edd07b2fea5bfea95f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233025 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:57:04.233034 1061187 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.934µs
	I1119 22:57:04.233041 1061187 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:57:04.233049 1061187 cache.go:107] acquiring lock: {Name:mk29595f21458d904bc2d24173d38f20affcf328 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233079 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:57:04.233088 1061187 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.582µs
	I1119 22:57:04.233138 1061187 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:57:04.233162 1061187 cache.go:107] acquiring lock: {Name:mk5ba73f1f86578edab04675b64317e89203f7a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.233229 1061187 cache.go:115] /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:57:04.233239 1061187 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 89.74µs
	I1119 22:57:04.233245 1061187 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:57:04.233252 1061187 cache.go:87] Successfully saved all images to host disk.
	I1119 22:57:04.253662 1061187 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:57:04.253682 1061187 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:57:04.253694 1061187 cache.go:243] Successfully downloaded all kic artifacts
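The cache.go lines above resolve each image reference to an on-disk tarball by swapping the tag separator ':' for '_' under <minikube home>/cache/images/<arch>/. A small hypothetical helper (not minikube source) that reproduces that layout, using the home directory and architecture from this run:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachedImagePath reconstructs the tarball location reported in the cache.go
// lines above: "registry.k8s.io/kube-apiserver:v1.34.1" becomes
// <home>/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1.
func cachedImagePath(minikubeHome, arch, image string) string {
	return filepath.Join(minikubeHome, "cache", "images", arch,
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	fmt.Println(cachedImagePath(
		"/home/jenkins/minikube-integration/21918-860325/.minikube",
		"arm64",
		"registry.k8s.io/kube-apiserver:v1.34.1"))
	// Prints the same path the log reports as already existing, which is why
	// every image above is marked "save to tar file ... succeeded" immediately.
}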
	I1119 22:57:04.253716 1061187 start.go:360] acquireMachinesLock for no-preload-018508: {Name:mk5707a3ba7045dab1a444980a59ede7567f2c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:57:04.253766 1061187 start.go:364] duration metric: took 34.478µs to acquireMachinesLock for "no-preload-018508"
	I1119 22:57:04.253788 1061187 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:57:04.253794 1061187 fix.go:54] fixHost starting: 
	I1119 22:57:04.254047 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:04.284745 1061187 fix.go:112] recreateIfNeeded on no-preload-018508: state=Stopped err=<nil>
	W1119 22:57:04.284782 1061187 fix.go:138] unexpected machine state, will restart: <nil>
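fix.go decides between reusing and recreating the machine from the container's state, read via the docker inspect call above. A hypothetical sketch of that check with the docker CLI (not the minikube source); the container name is the one from this run, and the state strings are Docker's, which minikube maps to the "Stopped" shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks the docker CLI for the raw container state, the same
// value the inspect template {{.State.Status}} returns above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("no-preload-018508")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Docker reports "exited" for a stopped container; the log answers that
	// with "docker start no-preload-018508" a few lines further down.
	if state == "exited" {
		fmt.Println("container stopped, restart required")
	}
}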
	I1119 22:57:01.276894 1058620 addons.go:515] duration metric: took 6.669515044s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 22:57:01.280202 1058620 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:57:01.281679 1058620 api_server.go:141] control plane version: v1.28.0
	I1119 22:57:01.281710 1058620 api_server.go:131] duration metric: took 13.850422ms to wait for apiserver health ...
	I1119 22:57:01.281721 1058620 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:57:01.286048 1058620 system_pods.go:59] 8 kube-system pods found
	I1119 22:57:01.286086 1058620 system_pods.go:61] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:01.286098 1058620 system_pods.go:61] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:01.286105 1058620 system_pods.go:61] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:57:01.286114 1058620 system_pods.go:61] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:01.286121 1058620 system_pods.go:61] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:01.286141 1058620 system_pods.go:61] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:57:01.286148 1058620 system_pods.go:61] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:01.286153 1058620 system_pods.go:61] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Running
	I1119 22:57:01.286170 1058620 system_pods.go:74] duration metric: took 4.442116ms to wait for pod list to return data ...
	I1119 22:57:01.286179 1058620 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:57:01.289357 1058620 default_sa.go:45] found service account: "default"
	I1119 22:57:01.289385 1058620 default_sa.go:55] duration metric: took 3.200048ms for default service account to be created ...
	I1119 22:57:01.289396 1058620 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:57:01.293351 1058620 system_pods.go:86] 8 kube-system pods found
	I1119 22:57:01.293386 1058620 system_pods.go:89] "coredns-5dd5756b68-sf6gl" [a5d9076c-6dc5-4069-8b3c-3cd6f314a341] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:01.293399 1058620 system_pods.go:89] "etcd-old-k8s-version-191961" [c76edf36-f4df-470e-9a92-0fca61f2e76f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:01.293405 1058620 system_pods.go:89] "kindnet-dtpd4" [e5d20ee8-59fe-46cb-889a-5fdeff81b3a4] Running
	I1119 22:57:01.293412 1058620 system_pods.go:89] "kube-apiserver-old-k8s-version-191961" [0e2b6b97-bc57-4605-b0e9-848cc4a3f45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:01.293419 1058620 system_pods.go:89] "kube-controller-manager-old-k8s-version-191961" [4932dbee-a0dd-4dad-a082-593cd14a705c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:01.293429 1058620 system_pods.go:89] "kube-proxy-rkdfn" [89be932c-52ba-4d29-8aec-9dd84268d731] Running
	I1119 22:57:01.293436 1058620 system_pods.go:89] "kube-scheduler-old-k8s-version-191961" [e583b977-0dc7-4315-a2ab-038999c554ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:01.293448 1058620 system_pods.go:89] "storage-provisioner" [d53ec514-54c2-484c-abbb-f57fb0107bb1] Running
	I1119 22:57:01.293456 1058620 system_pods.go:126] duration metric: took 4.054823ms to wait for k8s-apps to be running ...
	I1119 22:57:01.293465 1058620 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:57:01.293521 1058620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:01.309255 1058620 system_svc.go:56] duration metric: took 15.77983ms WaitForService to wait for kubelet
	I1119 22:57:01.309336 1058620 kubeadm.go:587] duration metric: took 6.7024287s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:01.309371 1058620 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:57:01.313566 1058620 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:57:01.313646 1058620 node_conditions.go:123] node cpu capacity is 2
	I1119 22:57:01.313675 1058620 node_conditions.go:105] duration metric: took 4.264228ms to run NodePressure ...
	I1119 22:57:01.313720 1058620 start.go:242] waiting for startup goroutines ...
	I1119 22:57:01.313747 1058620 start.go:247] waiting for cluster config update ...
	I1119 22:57:01.313774 1058620 start.go:256] writing updated cluster config ...
	I1119 22:57:01.314109 1058620 ssh_runner.go:195] Run: rm -f paused
	I1119 22:57:01.319408 1058620 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:01.324566 1058620 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 22:57:03.331150 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
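The pod_ready.go lines keep polling until the coredns pod reports Ready (or disappears). As a rough illustration only, here is a minimal client-go polling sketch, assuming the kubeconfig path from this run; it is not minikube's implementation and omits the "or be gone" branch:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21918-860325/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the pod named in the log until it turns Ready or we give up.
	for i := 0; i < 120; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-sf6gl", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for the pod to become Ready")
}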
	I1119 22:57:04.288150 1061187 out.go:252] * Restarting existing docker container for "no-preload-018508" ...
	I1119 22:57:04.288246 1061187 cli_runner.go:164] Run: docker start no-preload-018508
	I1119 22:57:04.547827 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:04.574469 1061187 kic.go:430] container "no-preload-018508" state is running.
	I1119 22:57:04.574913 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:04.599712 1061187 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/config.json ...
	I1119 22:57:04.599937 1061187 machine.go:94] provisionDockerMachine start ...
	I1119 22:57:04.600004 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:04.622473 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:04.622830 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:04.622840 1061187 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:57:04.623475 1061187 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38968->127.0.0.1:33856: read: connection reset by peer
	I1119 22:57:07.770624 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
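The provisioner dials SSH on 127.0.0.1:33856; that port comes from Docker's published-port mapping for the container's 22/tcp endpoint, queried with the inspect template shown above. A hypothetical sketch of the same lookup (not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port was published for the container's
// 22/tcp endpoint, using the same Go template as the inspect call in the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("no-preload-018508")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// In this run the lookup resolved to 33856, which the SSH client then
	// dialed as 127.0.0.1:33856.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}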
	
	I1119 22:57:07.770650 1061187 ubuntu.go:182] provisioning hostname "no-preload-018508"
	I1119 22:57:07.770712 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:07.787857 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:07.788199 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:07.788217 1061187 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-018508 && echo "no-preload-018508" | sudo tee /etc/hostname
	I1119 22:57:07.941527 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-018508
	
	I1119 22:57:07.941628 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:07.959380 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:07.959687 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:07.959710 1061187 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-018508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-018508/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-018508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:57:08.107538 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:57:08.107564 1061187 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:57:08.107589 1061187 ubuntu.go:190] setting up certificates
	I1119 22:57:08.107600 1061187 provision.go:84] configureAuth start
	I1119 22:57:08.107662 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:08.126156 1061187 provision.go:143] copyHostCerts
	I1119 22:57:08.126267 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:57:08.126290 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:57:08.126396 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:57:08.126557 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:57:08.126570 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:57:08.126607 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:57:08.126718 1061187 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:57:08.126729 1061187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:57:08.126763 1061187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:57:08.126845 1061187 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.no-preload-018508 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-018508]
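provision.go issues a server certificate whose SANs cover the loopback address, the container IP, and the machine names listed above. As a rough, hypothetical sketch, this produces a certificate with those SANs via crypto/x509; unlike minikube, which signs server.pem with its ca.pem/ca-key.pem, the sketch self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-018508"}},
		NotBefore:    time.Now(),
		// 26280h matches the CertExpiration value in the cluster config above.
		NotAfter:    time.Now().Add(26280 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-018508"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}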
	I1119 22:57:08.805102 1061187 provision.go:177] copyRemoteCerts
	I1119 22:57:08.805175 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:57:08.805216 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:08.826566 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:08.931363 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:57:08.951029 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:57:08.971638 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:57:08.990851 1061187 provision.go:87] duration metric: took 883.228354ms to configureAuth
	I1119 22:57:08.990901 1061187 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:57:08.991116 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:08.991234 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.011138 1061187 main.go:143] libmachine: Using SSH client type: native
	I1119 22:57:09.011518 1061187 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33856 <nil> <nil>}
	I1119 22:57:09.011558 1061187 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1119 22:57:05.830911 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:07.831041 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:09.386190 1061187 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:57:09.386256 1061187 machine.go:97] duration metric: took 4.786306358s to provisionDockerMachine
	I1119 22:57:09.386285 1061187 start.go:293] postStartSetup for "no-preload-018508" (driver="docker")
	I1119 22:57:09.386330 1061187 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:57:09.386440 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:57:09.386503 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.410059 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.515009 1061187 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:57:09.518358 1061187 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:57:09.518386 1061187 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:57:09.518398 1061187 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:57:09.518449 1061187 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:57:09.518537 1061187 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:57:09.518651 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:57:09.526220 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:57:09.544684 1061187 start.go:296] duration metric: took 158.365953ms for postStartSetup
	I1119 22:57:09.544763 1061187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:57:09.544822 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.562260 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.660029 1061187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:57:09.664958 1061187 fix.go:56] duration metric: took 5.41115647s for fixHost
	I1119 22:57:09.664982 1061187 start.go:83] releasing machines lock for "no-preload-018508", held for 5.411207859s
	I1119 22:57:09.665058 1061187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-018508
	I1119 22:57:09.682907 1061187 ssh_runner.go:195] Run: cat /version.json
	I1119 22:57:09.682963 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.683055 1061187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:57:09.683124 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:09.708164 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.715225 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:09.806406 1061187 ssh_runner.go:195] Run: systemctl --version
	I1119 22:57:09.904624 1061187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:57:09.941999 1061187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:57:09.946308 1061187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:57:09.946389 1061187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:57:09.954397 1061187 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:57:09.954419 1061187 start.go:496] detecting cgroup driver to use...
	I1119 22:57:09.954451 1061187 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:57:09.954498 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:57:09.971537 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:57:09.984558 1061187 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:57:09.984670 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:57:10.000866 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:57:10.017965 1061187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:57:10.148189 1061187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:57:10.273950 1061187 docker.go:234] disabling docker service ...
	I1119 22:57:10.274072 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:57:10.288999 1061187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:57:10.302390 1061187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:57:10.418686 1061187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:57:10.544989 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:57:10.559056 1061187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:57:10.573714 1061187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:57:10.573804 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.583803 1061187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:57:10.583871 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.593246 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.602563 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.611649 1061187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:57:10.620223 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.629541 1061187 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.638320 1061187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:57:10.647257 1061187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:57:10.655056 1061187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:57:10.662503 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:10.788625 1061187 ssh_runner.go:195] Run: sudo systemctl restart crio
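The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before cri-o is restarted. A small hypothetical check (not part of minikube, run on the node itself) that the drop-in ended up with the values those commands set:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Values taken from the sed edits logged above.
	expected := []string{
		`pause_image = "registry.k8s.io/pause:3.10.1"`,
		`cgroup_manager = "cgroupfs"`,
		`conmon_cgroup = "pod"`,
		`"net.ipv4.ip_unprivileged_port_start=0"`,
	}
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		panic(err)
	}
	for _, want := range expected {
		if !strings.Contains(string(data), want) {
			fmt.Println("missing:", want)
		}
	}
}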
	I1119 22:57:10.969669 1061187 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:57:10.969743 1061187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:57:10.974557 1061187 start.go:564] Will wait 60s for crictl version
	I1119 22:57:10.974631 1061187 ssh_runner.go:195] Run: which crictl
	I1119 22:57:10.978427 1061187 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:57:11.007254 1061187 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:57:11.007371 1061187 ssh_runner.go:195] Run: crio --version
	I1119 22:57:11.051672 1061187 ssh_runner.go:195] Run: crio --version
	I1119 22:57:11.123753 1061187 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:57:11.126625 1061187 cli_runner.go:164] Run: docker network inspect no-preload-018508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:57:11.144816 1061187 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:57:11.151118 1061187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
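The bash one-liner above rewrites /etc/hosts by dropping any existing host.minikube.internal line and appending the Docker network gateway. The same idea as a small, hypothetical Go helper (not minikube code; it must run as root on the node):

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry mirrors the shell pipeline above: drop every line ending
// in "<tab>name", then append "ip<tab>name".
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from this run: the 192.168.85.0/24 network gateway and minikube's host alias.
	if err := upsertHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}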
	I1119 22:57:11.166649 1061187 kubeadm.go:884] updating cluster {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:57:11.166784 1061187 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:57:11.166834 1061187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:57:11.231528 1061187 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:57:11.231553 1061187 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:57:11.231561 1061187 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:57:11.231657 1061187 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-018508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:57:11.231740 1061187 ssh_runner.go:195] Run: crio config
	I1119 22:57:11.328927 1061187 cni.go:84] Creating CNI manager for ""
	I1119 22:57:11.328954 1061187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:57:11.328971 1061187 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:57:11.328994 1061187 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-018508 NodeName:no-preload-018508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:57:11.329129 1061187 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-018508"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:57:11.329214 1061187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:57:11.339279 1061187 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:57:11.339357 1061187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:57:11.349159 1061187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 22:57:11.365849 1061187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:57:11.385466 1061187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:57:11.409315 1061187 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:57:11.412934 1061187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:57:11.427670 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:11.581123 1061187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:57:11.598425 1061187 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508 for IP: 192.168.85.2
	I1119 22:57:11.598447 1061187 certs.go:195] generating shared ca certs ...
	I1119 22:57:11.598463 1061187 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:11.598601 1061187 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:57:11.598653 1061187 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:57:11.598666 1061187 certs.go:257] generating profile certs ...
	I1119 22:57:11.598749 1061187 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.key
	I1119 22:57:11.598819 1061187 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key.7c4af07e
	I1119 22:57:11.598860 1061187 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key
	I1119 22:57:11.599002 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:57:11.599041 1061187 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:57:11.599054 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:57:11.599083 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:57:11.599110 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:57:11.599141 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:57:11.599185 1061187 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:57:11.599781 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:57:11.625510 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:57:11.660077 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:57:11.709464 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:57:11.751942 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:57:11.770186 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:57:11.795339 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:57:11.829029 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:57:11.853542 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:57:11.889106 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:57:11.910851 1061187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:57:11.933584 1061187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:57:11.947766 1061187 ssh_runner.go:195] Run: openssl version
	I1119 22:57:11.956358 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:57:11.965742 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:57:11.971749 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:57:11.971832 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:57:12.020082 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:57:12.030140 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:57:12.041442 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.046324 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.046404 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:57:12.100852 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:57:12.109930 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:57:12.119574 1061187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.123922 1061187 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.124046 1061187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:57:12.169044 1061187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 22:57:12.178144 1061187 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:57:12.197626 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:57:12.248888 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:57:12.342860 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:57:12.441384 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:57:12.553705 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:57:12.639403 1061187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
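	The openssl runs just above use "-checkend 86400" to verify that each existing control-plane certificate stays valid for at least another 24 hours before the profile reuses it. A minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk (the path is copied from the log purely for illustration):

    // Minimal sketch: report whether a PEM-encoded certificate expires within
    // 24h, roughly the decision `openssl x509 -checkend 86400` makes above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; would need regeneration")
    	} else {
    		fmt.Println("certificate valid for at least another 24h")
    	}
    }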
	I1119 22:57:12.864893 1061187 kubeadm.go:401] StartCluster: {Name:no-preload-018508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-018508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:57:12.865018 1061187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:57:12.865102 1061187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:57:13.030532 1061187 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:57:13.030690 1061187 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:57:13.030713 1061187 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:57:13.030762 1061187 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:57:13.030783 1061187 cri.go:89] found id: ""
	I1119 22:57:13.030906 1061187 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:57:13.079279 1061187 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:57:13Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:57:13.079445 1061187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:57:13.106678 1061187 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:57:13.106736 1061187 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:57:13.106841 1061187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:57:13.131899 1061187 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:57:13.132590 1061187 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-018508" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:13.132921 1061187 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-018508" cluster setting kubeconfig missing "no-preload-018508" context setting]
	I1119 22:57:13.133483 1061187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.135298 1061187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:57:13.149458 1061187 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:57:13.149538 1061187 kubeadm.go:602] duration metric: took 42.771543ms to restartPrimaryControlPlane
	I1119 22:57:13.149561 1061187 kubeadm.go:403] duration metric: took 284.678413ms to StartCluster
	I1119 22:57:13.149606 1061187 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.149703 1061187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:57:13.150804 1061187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:57:13.151100 1061187 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:57:13.151662 1061187 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:57:13.151608 1061187 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:57:13.151840 1061187 addons.go:70] Setting storage-provisioner=true in profile "no-preload-018508"
	I1119 22:57:13.151882 1061187 addons.go:239] Setting addon storage-provisioner=true in "no-preload-018508"
	W1119 22:57:13.151915 1061187 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:57:13.152013 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.151845 1061187 addons.go:70] Setting dashboard=true in profile "no-preload-018508"
	I1119 22:57:13.152135 1061187 addons.go:239] Setting addon dashboard=true in "no-preload-018508"
	W1119 22:57:13.152143 1061187 addons.go:248] addon dashboard should already be in state true
	I1119 22:57:13.152168 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.152597 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.152755 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.151855 1061187 addons.go:70] Setting default-storageclass=true in profile "no-preload-018508"
	I1119 22:57:13.153393 1061187 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-018508"
	I1119 22:57:13.153657 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.157252 1061187 out.go:179] * Verifying Kubernetes components...
	I1119 22:57:13.163391 1061187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:57:13.195731 1061187 addons.go:239] Setting addon default-storageclass=true in "no-preload-018508"
	W1119 22:57:13.195755 1061187 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:57:13.195779 1061187 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:57:13.196184 1061187 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:57:13.214071 1061187 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:57:13.217021 1061187 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:57:13.217043 1061187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:57:13.217111 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.221819 1061187 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:57:13.229130 1061187 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:57:13.232866 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:57:13.232893 1061187 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:57:13.232976 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.250247 1061187 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:57:13.250272 1061187 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:57:13.250335 1061187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:57:13.255385 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.292560 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.298979 1061187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:57:13.629899 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:57:13.714280 1061187 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:57:13.718667 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:57:13.738121 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:57:13.738184 1061187 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:57:13.835108 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:57:13.835173 1061187 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:57:13.939435 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:57:13.939502 1061187 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1119 22:57:10.331375 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:12.837000 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:14.062539 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:57:14.062573 1061187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:57:14.201324 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:57:14.201352 1061187 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:57:14.241353 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:57:14.241425 1061187 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:57:14.287522 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:57:14.287597 1061187 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:57:14.330201 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:57:14.330272 1061187 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:57:14.355070 1061187 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:57:14.355151 1061187 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:57:14.392506 1061187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1119 22:57:15.333834 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:17.334585 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:19.337505 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:23.851447 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.221512783s)
	I1119 22:57:23.851506 1061187 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.137163535s)
	I1119 22:57:23.851542 1061187 node_ready.go:35] waiting up to 6m0s for node "no-preload-018508" to be "Ready" ...
	I1119 22:57:23.851844 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.133101385s)
	I1119 22:57:23.882583 1061187 node_ready.go:49] node "no-preload-018508" is "Ready"
	I1119 22:57:23.882825 1061187 node_ready.go:38] duration metric: took 31.248524ms for node "no-preload-018508" to be "Ready" ...
	I1119 22:57:23.882959 1061187 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:57:23.883069 1061187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:57:23.898117 1061187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.505521144s)
	I1119 22:57:23.901659 1061187 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-018508 addons enable metrics-server
	
	I1119 22:57:23.904886 1061187 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1119 22:57:23.907967 1061187 addons.go:515] duration metric: took 10.756350881s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 22:57:23.928641 1061187 api_server.go:72] duration metric: took 10.777483975s to wait for apiserver process to appear ...
	I1119 22:57:23.928710 1061187 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:57:23.928746 1061187 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:57:23.943766 1061187 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:57:23.945283 1061187 api_server.go:141] control plane version: v1.34.1
	I1119 22:57:23.945358 1061187 api_server.go:131] duration metric: took 16.624401ms to wait for apiserver health ...
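	The healthz check above polls https://192.168.85.2:8443/healthz and treats a 200 response with body "ok" as healthy before reading the control-plane version. A minimal sketch of such a probe in Go; skipping TLS verification is only a shortcut for the sketch, not a claim about how minikube itself authenticates to the apiserver:

    // Minimal sketch: poll the apiserver /healthz endpoint until it answers
    // 200 with body "ok", as in the check logged above. Endpoint copied from
    // the log for illustration; InsecureSkipVerify is a sketch-only shortcut.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }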
	I1119 22:57:23.945383 1061187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:57:23.955062 1061187 system_pods.go:59] 8 kube-system pods found
	I1119 22:57:23.955147 1061187 system_pods.go:61] "coredns-66bc5c9577-rxhmf" [71cc8e54-484a-403e-bda7-a4e70390d4c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:23.955183 1061187 system_pods.go:61] "etcd-no-preload-018508" [8f8923b4-736d-447c-8130-1615741a5ca8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:23.955221 1061187 system_pods.go:61] "kindnet-2n4sq" [0c7558d8-6110-4c54-851c-23315a8a713c] Running
	I1119 22:57:23.955260 1061187 system_pods.go:61] "kube-apiserver-no-preload-018508" [3c2ba7c2-c96b-4683-9e19-d6dc92c78be7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:23.955286 1061187 system_pods.go:61] "kube-controller-manager-no-preload-018508" [72bd7185-6053-4ea8-ad99-8f6bebd83526] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:23.955315 1061187 system_pods.go:61] "kube-proxy-pn4pw" [2162b2bd-a5f2-4538-8a48-79a0246a58eb] Running
	I1119 22:57:23.955346 1061187 system_pods.go:61] "kube-scheduler-no-preload-018508" [a65aee10-0318-4615-aaba-4e411683dd55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:23.955372 1061187 system_pods.go:61] "storage-provisioner" [6c83bdb2-ec25-44e1-ae35-77e4dea28165] Running
	I1119 22:57:23.955398 1061187 system_pods.go:74] duration metric: took 9.992335ms to wait for pod list to return data ...
	I1119 22:57:23.955421 1061187 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:57:23.963716 1061187 default_sa.go:45] found service account: "default"
	I1119 22:57:23.963783 1061187 default_sa.go:55] duration metric: took 8.330006ms for default service account to be created ...
	I1119 22:57:23.963807 1061187 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:57:23.969465 1061187 system_pods.go:86] 8 kube-system pods found
	I1119 22:57:23.969547 1061187 system_pods.go:89] "coredns-66bc5c9577-rxhmf" [71cc8e54-484a-403e-bda7-a4e70390d4c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:57:23.969572 1061187 system_pods.go:89] "etcd-no-preload-018508" [8f8923b4-736d-447c-8130-1615741a5ca8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:57:23.969612 1061187 system_pods.go:89] "kindnet-2n4sq" [0c7558d8-6110-4c54-851c-23315a8a713c] Running
	I1119 22:57:23.969642 1061187 system_pods.go:89] "kube-apiserver-no-preload-018508" [3c2ba7c2-c96b-4683-9e19-d6dc92c78be7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:57:23.969669 1061187 system_pods.go:89] "kube-controller-manager-no-preload-018508" [72bd7185-6053-4ea8-ad99-8f6bebd83526] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:57:23.969695 1061187 system_pods.go:89] "kube-proxy-pn4pw" [2162b2bd-a5f2-4538-8a48-79a0246a58eb] Running
	I1119 22:57:23.969728 1061187 system_pods.go:89] "kube-scheduler-no-preload-018508" [a65aee10-0318-4615-aaba-4e411683dd55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:57:23.969754 1061187 system_pods.go:89] "storage-provisioner" [6c83bdb2-ec25-44e1-ae35-77e4dea28165] Running
	I1119 22:57:23.969781 1061187 system_pods.go:126] duration metric: took 5.950615ms to wait for k8s-apps to be running ...
	I1119 22:57:23.969808 1061187 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:57:23.969889 1061187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:57:23.992462 1061187 system_svc.go:56] duration metric: took 22.646262ms WaitForService to wait for kubelet
	I1119 22:57:23.992494 1061187 kubeadm.go:587] duration metric: took 10.841342411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:57:23.992522 1061187 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:57:23.997356 1061187 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:57:23.997390 1061187 node_conditions.go:123] node cpu capacity is 2
	I1119 22:57:23.997403 1061187 node_conditions.go:105] duration metric: took 4.874727ms to run NodePressure ...
	I1119 22:57:23.997430 1061187 start.go:242] waiting for startup goroutines ...
	I1119 22:57:23.997441 1061187 start.go:247] waiting for cluster config update ...
	I1119 22:57:23.997459 1061187 start.go:256] writing updated cluster config ...
	I1119 22:57:23.997795 1061187 ssh_runner.go:195] Run: rm -f paused
	I1119 22:57:24.002699 1061187 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:24.009232 1061187 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rxhmf" in "kube-system" namespace to be "Ready" or be gone ...
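	The pod_ready wait above repeatedly checks whether each selected kube-system pod carries a Ready condition; the recurring "is not Ready" warnings that follow are those intermediate polls. A hedged sketch of the core check using client-go, assuming the k8s.io/client-go module is available; the kubeconfig path and pod name are taken from the log purely for illustration:

    // Minimal sketch: report whether a named pod has its Ready condition set
    // to True, the condition the pod_ready wait above is polling for.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path; any valid kubeconfig works here.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21918-860325/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-rxhmf", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }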
	W1119 22:57:21.350797 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:23.832992 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:26.016733 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:28.515866 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:26.332016 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:28.336766 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:31.027488 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:33.515870 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:30.830716 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:32.831069 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:34.831548 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	W1119 22:57:35.518012 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:38.016484 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:37.331586 1058620 pod_ready.go:104] pod "coredns-5dd5756b68-sf6gl" is not "Ready", error: <nil>
	I1119 22:57:37.830571 1058620 pod_ready.go:94] pod "coredns-5dd5756b68-sf6gl" is "Ready"
	I1119 22:57:37.830594 1058620 pod_ready.go:86] duration metric: took 36.505956168s for pod "coredns-5dd5756b68-sf6gl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.833866 1058620 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.838819 1058620 pod_ready.go:94] pod "etcd-old-k8s-version-191961" is "Ready"
	I1119 22:57:37.838849 1058620 pod_ready.go:86] duration metric: took 4.956147ms for pod "etcd-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.841873 1058620 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.848192 1058620 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-191961" is "Ready"
	I1119 22:57:37.848220 1058620 pod_ready.go:86] duration metric: took 6.324912ms for pod "kube-apiserver-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:37.851336 1058620 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.029464 1058620 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-191961" is "Ready"
	I1119 22:57:38.029545 1058620 pod_ready.go:86] duration metric: took 178.185891ms for pod "kube-controller-manager-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.229396 1058620 pod_ready.go:83] waiting for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.628053 1058620 pod_ready.go:94] pod "kube-proxy-rkdfn" is "Ready"
	I1119 22:57:38.628080 1058620 pod_ready.go:86] duration metric: took 398.658899ms for pod "kube-proxy-rkdfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:38.829389 1058620 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:39.228679 1058620 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-191961" is "Ready"
	I1119 22:57:39.228710 1058620 pod_ready.go:86] duration metric: took 399.295902ms for pod "kube-scheduler-old-k8s-version-191961" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:57:39.228723 1058620 pod_ready.go:40] duration metric: took 37.909221709s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:57:39.285450 1058620 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 22:57:39.288759 1058620 out.go:203] 
	W1119 22:57:39.291794 1058620 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:57:39.294693 1058620 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:57:39.297633 1058620 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-191961" cluster and "default" namespace by default
	W1119 22:57:40.026290 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:42.515278 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:45.029634 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:47.515581 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:50.018430 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	W1119 22:57:52.516100 1061187 pod_ready.go:104] pod "coredns-66bc5c9577-rxhmf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.933014196Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.937069118Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.937104835Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.93712717Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941313433Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941356141Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.941383193Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945713933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945750938Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.945802598Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.950325799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:57:40 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:40.950364404Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.859432713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b74aa6b-3718-4a9c-b7d1-8571e1968f4d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.860623647Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d7f24f87-2562-475a-84b2-2378c36c8bd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.861959657Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=df1b22ef-8001-47cb-9dc0-8ff6dfde7cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.86208408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.870405003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.871167471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.891954341Z" level=info msg="Created container 1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=df1b22ef-8001-47cb-9dc0-8ff6dfde7cb7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.89448487Z" level=info msg="Starting container: 1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8" id=09db5cb4-af55-4da1-85b2-f37d496fb249 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:57:42 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:42.896852773Z" level=info msg="Started container" PID=1753 containerID=1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper id=09db5cb4-af55-4da1-85b2-f37d496fb249 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168
	Nov 19 22:57:42 old-k8s-version-191961 conmon[1751]: conmon 1e677e29cd3752fa7ed7 <ninfo>: container 1753 exited with status 1
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.138804558Z" level=info msg="Removing container: f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.147460376Z" level=info msg="Error loading conmon cgroup of container f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4: cgroup deleted" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:57:43 old-k8s-version-191961 crio[663]: time="2025-11-19T22:57:43.152684448Z" level=info msg="Removed container f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw/dashboard-metrics-scraper" id=7d0dbfe5-c287-4a3d-b399-5bf64fa17fd3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1e677e29cd375       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   ad7030e403338       dashboard-metrics-scraper-5f989dc9cf-4f2bw       kubernetes-dashboard
	7c81dcbf758f3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   0a11f185687e1       storage-provisioner                              kube-system
	15f172e1e9015       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   58a7803e26d91       kubernetes-dashboard-8694d4445c-mxfnk            kubernetes-dashboard
	66d481d552c0c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   26b92c6d51781       coredns-5dd5756b68-sf6gl                         kube-system
	32c78aa0c20f9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   31aeaf1cc12fd       busybox                                          default
	b56f2b24a58be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   0a11f185687e1       storage-provisioner                              kube-system
	48938cdfbab01       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   7a9f55e4d3b94       kindnet-dtpd4                                    kube-system
	991962c053d8c       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   aa85be79d32fe       kube-proxy-rkdfn                                 kube-system
	40ffa36b8db7d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   473147fd51446       kube-controller-manager-old-k8s-version-191961   kube-system
	dc473b93b033b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   4b3bd4fb25b14       kube-scheduler-old-k8s-version-191961            kube-system
	d5661c576bd48       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   736208b20b905       etcd-old-k8s-version-191961                      kube-system
	a0adb72d131f7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   74d992276ebe5       kube-apiserver-old-k8s-version-191961            kube-system
	
	
	==> coredns [66d481d552c0c92d846bfcde284f4bbca87eb63d685d51037b2d199683b3b8b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51358 - 13150 "HINFO IN 3754183133172108303.7751547629821104851. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032745549s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-191961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-191961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-191961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:55:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-191961
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:55:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:57:29 +0000   Wed, 19 Nov 2025 22:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-191961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                a586ad19-3112-4f7e-a794-67583869230e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-sf6gl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-old-k8s-version-191961                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-dtpd4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-191961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-191961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-rkdfn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-191961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4f2bw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mxfnk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-191961 event: Registered Node old-k8s-version-191961 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-191961 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-191961 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-191961 event: Registered Node old-k8s-version-191961 in Controller
	
	
	==> dmesg <==
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d5661c576bd48805df3310a42259b25f3d6358219721fabef991e12173d9f4d0] <==
	{"level":"info","ts":"2025-11-19T22:56:54.559204Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:56:54.559212Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:56:54.559424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:56:54.55948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:56:54.559558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:56:54.559584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:56:54.567271Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:56:54.571416Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:56:54.571451Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:56:54.571185Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:56:54.571478Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:56:56.134896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:56:56.135107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.135227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:56:56.137645Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-191961 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:56:56.137827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:56:56.138911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T22:56:56.146894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:56:56.147555Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:56:56.14759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:56:56.148076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:57:56 up  4:40,  0 user,  load average: 4.14, 3.19, 2.50
	Linux old-k8s-version-191961 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [48938cdfbab01455db270b6a2524cd6415ce23a3701805784912fac3f64e75b3] <==
	I1119 22:57:00.697707       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:57:00.698031       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:57:00.698196       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:57:00.698209       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:57:00.698223       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:57:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:57:00.925691       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:57:00.925710       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:57:00.925719       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:57:00.926024       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:57:30.927538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:57:30.927650       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 22:57:30.927807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:57:30.930250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1119 22:57:32.426393       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:57:32.426434       1 metrics.go:72] Registering metrics
	I1119 22:57:32.426487       1 controller.go:711] "Syncing nftables rules"
	I1119 22:57:40.926053       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:57:40.926769       1 main.go:301] handling current node
	I1119 22:57:50.931778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:57:50.931809       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a0adb72d131f7a3e37e9659ed410147e226ae0a9c56505fa2588b71340e71ecd] <==
	I1119 22:56:58.601317       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 22:56:58.941640       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:56:58.991823       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:56:59.001068       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:56:59.001163       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:56:59.008144       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:56:59.010559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:56:59.018815       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:56:59.027088       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 22:56:59.027864       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 22:56:59.028669       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:56:59.028725       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:56:59.028755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:56:59.028791       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:56:59.594751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:57:01.065968       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:57:01.117356       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:57:01.145161       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:57:01.155912       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:57:01.169727       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:57:01.233903       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.236.52"}
	I1119 22:57:01.259580       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.23.152"}
	I1119 22:57:11.194481       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:57:11.260267       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:57:11.337748       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [40ffa36b8db7d4bddf7eec2be93374f85c68b4f8475b1ca8f95a6e259bc4b4ec] <==
	I1119 22:57:11.222481       1 shared_informer.go:318] Caches are synced for endpoint
	I1119 22:57:11.231845       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1119 22:57:11.255357       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:57:11.265127       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 22:57:11.276784       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mxfnk"
	I1119 22:57:11.276948       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	I1119 22:57:11.306330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.31081ms"
	I1119 22:57:11.307043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.868964ms"
	I1119 22:57:11.414271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.141732ms"
	I1119 22:57:11.414402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.39µs"
	I1119 22:57:11.418372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.864µs"
	I1119 22:57:11.418607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.154035ms"
	I1119 22:57:11.448881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.244814ms"
	I1119 22:57:11.448973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.856µs"
	I1119 22:57:11.702494       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:57:11.702521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:57:11.722211       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:57:20.081788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.496398ms"
	I1119 22:57:20.082029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.942µs"
	I1119 22:57:28.119132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.413µs"
	I1119 22:57:29.122211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.684µs"
	I1119 22:57:30.126853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.108µs"
	I1119 22:57:37.586428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.328929ms"
	I1119 22:57:37.586810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.924µs"
	I1119 22:57:43.157002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.809µs"
	
	
	==> kube-proxy [991962c053d8c37f8fb7d52404d9d0e4a26ce40375bc807670a69e5e309d0e10] <==
	I1119 22:57:00.799989       1 server_others.go:69] "Using iptables proxy"
	I1119 22:57:00.823013       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:57:00.869307       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:57:00.904200       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:57:00.904248       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:57:00.904257       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:57:00.904285       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:57:00.904527       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:57:00.904543       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:00.915899       1 config.go:188] "Starting service config controller"
	I1119 22:57:00.917004       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:57:00.917111       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:57:00.917143       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:57:00.918041       1 config.go:315] "Starting node config controller"
	I1119 22:57:00.920244       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:57:01.019829       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:57:01.019929       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:57:01.020363       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [dc473b93b033b07c30f493568e843909bf72c0923b3edcfa7b790acdcd5d2734] <==
	I1119 22:56:58.151820       1 serving.go:348] Generated self-signed cert in-memory
	I1119 22:56:59.411338       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 22:56:59.411447       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:56:59.416304       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 22:56:59.416511       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 22:56:59.416558       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 22:56:59.416606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 22:56:59.418331       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:56:59.423693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 22:56:59.422983       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:56:59.424072       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:56:59.517161       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 22:56:59.524680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:56:59.524742       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.300363     794 topology_manager.go:215] "Topology Admit Handler" podUID="4bdd98f3-4299-4444-93ee-dbf5f3d503ed" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.318668     794 topology_manager.go:215] "Topology Admit Handler" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428279     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/320ae2e5-a523-4c14-9c8d-277f0d7218a2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4f2bw\" (UID: \"320ae2e5-a523-4c14-9c8d-277f0d7218a2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428502     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sm7\" (UniqueName: \"kubernetes.io/projected/320ae2e5-a523-4c14-9c8d-277f0d7218a2-kube-api-access-b2sm7\") pod \"dashboard-metrics-scraper-5f989dc9cf-4f2bw\" (UID: \"320ae2e5-a523-4c14-9c8d-277f0d7218a2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428697     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bdd98f3-4299-4444-93ee-dbf5f3d503ed-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mxfnk\" (UID: \"4bdd98f3-4299-4444-93ee-dbf5f3d503ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: I1119 22:57:11.428802     794 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvxl\" (UniqueName: \"kubernetes.io/projected/4bdd98f3-4299-4444-93ee-dbf5f3d503ed-kube-api-access-jnvxl\") pod \"kubernetes-dashboard-8694d4445c-mxfnk\" (UID: \"4bdd98f3-4299-4444-93ee-dbf5f3d503ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk"
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: W1119 22:57:11.658981     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b WatchSource:0}: Error finding container 58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b: Status 404 returned error can't find the container with id 58a7803e26d91dadbd3e204bcd9ae896a088341e927f6731344fc57d8f69161b
	Nov 19 22:57:11 old-k8s-version-191961 kubelet[794]: W1119 22:57:11.706804     794 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e6ae989c9f9951be8f3ee14398882458f676c9869029b20edd9529af968575ee/crio-ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168 WatchSource:0}: Error finding container ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168: Status 404 returned error can't find the container with id ad7030e40333832a21ced7f9208bc4679a0f52ceca1964798e4b0f27e9e96168
	Nov 19 22:57:28 old-k8s-version-191961 kubelet[794]: I1119 22:57:28.085399     794 scope.go:117] "RemoveContainer" containerID="98b505204789766526dcab1f128451e826363cf3c3a53eb40bd1516bcf2a26c3"
	Nov 19 22:57:28 old-k8s-version-191961 kubelet[794]: I1119 22:57:28.123565     794 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mxfnk" podStartSLOduration=8.930762704 podCreationTimestamp="2025-11-19 22:57:11 +0000 UTC" firstStartedPulling="2025-11-19 22:57:11.66592631 +0000 UTC m=+17.993214362" lastFinishedPulling="2025-11-19 22:57:19.858665922 +0000 UTC m=+26.185953974" observedRunningTime="2025-11-19 22:57:20.046437276 +0000 UTC m=+26.373725328" watchObservedRunningTime="2025-11-19 22:57:28.123502316 +0000 UTC m=+34.450790368"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: I1119 22:57:29.089126     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: I1119 22:57:29.089516     794 scope.go:117] "RemoveContainer" containerID="98b505204789766526dcab1f128451e826363cf3c3a53eb40bd1516bcf2a26c3"
	Nov 19 22:57:29 old-k8s-version-191961 kubelet[794]: E1119 22:57:29.094900     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:30 old-k8s-version-191961 kubelet[794]: I1119 22:57:30.093122     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:30 old-k8s-version-191961 kubelet[794]: E1119 22:57:30.094047     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: I1119 22:57:31.097648     794 scope.go:117] "RemoveContainer" containerID="b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: I1119 22:57:31.622775     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:31 old-k8s-version-191961 kubelet[794]: E1119 22:57:31.623605     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:42 old-k8s-version-191961 kubelet[794]: I1119 22:57:42.858683     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: I1119 22:57:43.133836     794 scope.go:117] "RemoveContainer" containerID="f4821dc20944fa9dd228268b13b5d4421038ad452e99c6196469071fa9d90eb4"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: I1119 22:57:43.134785     794 scope.go:117] "RemoveContainer" containerID="1e677e29cd3752fa7ed7d8009207a09bc32e17c99f85cb493f74623402c4abb8"
	Nov 19 22:57:43 old-k8s-version-191961 kubelet[794]: E1119 22:57:43.135212     794 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4f2bw_kubernetes-dashboard(320ae2e5-a523-4c14-9c8d-277f0d7218a2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4f2bw" podUID="320ae2e5-a523-4c14-9c8d-277f0d7218a2"
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:57:51 old-k8s-version-191961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [15f172e1e9015f511d2f9658c286a7d2e124c2f8d7ce49a0480611b9338af010] <==
	2025/11/19 22:57:19 Using namespace: kubernetes-dashboard
	2025/11/19 22:57:19 Using in-cluster config to connect to apiserver
	2025/11/19 22:57:19 Using secret token for csrf signing
	2025/11/19 22:57:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:57:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:57:20 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 22:57:20 Generating JWE encryption key
	2025/11/19 22:57:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:57:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:57:21 Initializing JWE encryption key from synchronized object
	2025/11/19 22:57:21 Creating in-cluster Sidecar client
	2025/11/19 22:57:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:21 Serving insecurely on HTTP port: 9090
	2025/11/19 22:57:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:19 Starting overwatch
	
	
	==> storage-provisioner [7c81dcbf758f34246c6eb872955ba1903e0b6b3d9cf8cdf578aa6ba6198b72ac] <==
	I1119 22:57:31.198769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:57:31.227300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:57:31.227429       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:57:48.656854       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:57:48.657005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b686f98b-4f98-4c94-964a-7a5f07bf7388", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9 became leader
	I1119 22:57:48.657027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9!
	I1119 22:57:48.757949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-191961_2524a198-876e-41b9-b54d-a28ccb1d36d9!
	
	
	==> storage-provisioner [b56f2b24a58be4362c56c2784b7237094d790158c7341ea0cadd8dff6a5cf241] <==
	I1119 22:57:00.566461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:57:30.568921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191961 -n old-k8s-version-191961
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191961 -n old-k8s-version-191961: exit status 2 (358.581076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-191961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-018508 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-018508 --alsologtostderr -v=1: exit status 80 (2.570981077s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-018508 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:58:09.023573 1066181 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:09.023817 1066181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:09.023847 1066181 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:09.023866 1066181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:09.024179 1066181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:58:09.024487 1066181 out.go:368] Setting JSON to false
	I1119 22:58:09.024540 1066181 mustload.go:66] Loading cluster: no-preload-018508
	I1119 22:58:09.024999 1066181 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:09.025638 1066181 cli_runner.go:164] Run: docker container inspect no-preload-018508 --format={{.State.Status}}
	I1119 22:58:09.044920 1066181 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:58:09.045344 1066181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:09.125553 1066181 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-19 22:58:09.111384023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:09.126369 1066181 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-018508 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:58:09.129889 1066181 out.go:179] * Pausing node no-preload-018508 ... 
	I1119 22:58:09.133626 1066181 host.go:66] Checking if "no-preload-018508" exists ...
	I1119 22:58:09.133961 1066181 ssh_runner.go:195] Run: systemctl --version
	I1119 22:58:09.134005 1066181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-018508
	I1119 22:58:09.155918 1066181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33856 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/no-preload-018508/id_rsa Username:docker}
	I1119 22:58:09.266104 1066181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:58:09.290799 1066181 pause.go:52] kubelet running: true
	I1119 22:58:09.290945 1066181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:58:09.566427 1066181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:58:09.566586 1066181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:58:09.635628 1066181 cri.go:89] found id: "43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3"
	I1119 22:58:09.635663 1066181 cri.go:89] found id: "5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae"
	I1119 22:58:09.635670 1066181 cri.go:89] found id: "bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3"
	I1119 22:58:09.635704 1066181 cri.go:89] found id: "d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	I1119 22:58:09.635708 1066181 cri.go:89] found id: "f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9"
	I1119 22:58:09.635712 1066181 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:58:09.635716 1066181 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:58:09.635719 1066181 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:58:09.635722 1066181 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:58:09.635729 1066181 cri.go:89] found id: "adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	I1119 22:58:09.635737 1066181 cri.go:89] found id: "ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f"
	I1119 22:58:09.635740 1066181 cri.go:89] found id: ""
	I1119 22:58:09.635809 1066181 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:58:09.647009 1066181 retry.go:31] will retry after 255.791855ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:58:09Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:58:09.903548 1066181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:58:09.916659 1066181 pause.go:52] kubelet running: false
	I1119 22:58:09.916728 1066181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:58:10.084655 1066181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:58:10.084759 1066181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:58:10.153053 1066181 cri.go:89] found id: "43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3"
	I1119 22:58:10.153078 1066181 cri.go:89] found id: "5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae"
	I1119 22:58:10.153084 1066181 cri.go:89] found id: "bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3"
	I1119 22:58:10.153088 1066181 cri.go:89] found id: "d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	I1119 22:58:10.153092 1066181 cri.go:89] found id: "f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9"
	I1119 22:58:10.153096 1066181 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:58:10.153100 1066181 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:58:10.153105 1066181 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:58:10.153108 1066181 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:58:10.153115 1066181 cri.go:89] found id: "adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	I1119 22:58:10.153119 1066181 cri.go:89] found id: "ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f"
	I1119 22:58:10.153123 1066181 cri.go:89] found id: ""
	I1119 22:58:10.153173 1066181 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:58:10.164437 1066181 retry.go:31] will retry after 405.880621ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:58:10Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:58:10.571143 1066181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:58:10.584062 1066181 pause.go:52] kubelet running: false
	I1119 22:58:10.584164 1066181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:58:10.747215 1066181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:58:10.747293 1066181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:58:10.849373 1066181 cri.go:89] found id: "43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3"
	I1119 22:58:10.849398 1066181 cri.go:89] found id: "5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae"
	I1119 22:58:10.849404 1066181 cri.go:89] found id: "bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3"
	I1119 22:58:10.849408 1066181 cri.go:89] found id: "d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	I1119 22:58:10.849411 1066181 cri.go:89] found id: "f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9"
	I1119 22:58:10.849415 1066181 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:58:10.849419 1066181 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:58:10.849443 1066181 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:58:10.849451 1066181 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:58:10.849458 1066181 cri.go:89] found id: "adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	I1119 22:58:10.849465 1066181 cri.go:89] found id: "ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f"
	I1119 22:58:10.849468 1066181 cri.go:89] found id: ""
	I1119 22:58:10.849532 1066181 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:58:10.866355 1066181 retry.go:31] will retry after 290.841907ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:58:10Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:58:11.157841 1066181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:58:11.172949 1066181 pause.go:52] kubelet running: false
	I1119 22:58:11.173017 1066181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:58:11.381153 1066181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:58:11.381242 1066181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:58:11.494269 1066181 cri.go:89] found id: "43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3"
	I1119 22:58:11.494309 1066181 cri.go:89] found id: "5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae"
	I1119 22:58:11.494315 1066181 cri.go:89] found id: "bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3"
	I1119 22:58:11.494318 1066181 cri.go:89] found id: "d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	I1119 22:58:11.494322 1066181 cri.go:89] found id: "f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9"
	I1119 22:58:11.494325 1066181 cri.go:89] found id: "8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d"
	I1119 22:58:11.494328 1066181 cri.go:89] found id: "1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08"
	I1119 22:58:11.494332 1066181 cri.go:89] found id: "0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f"
	I1119 22:58:11.494335 1066181 cri.go:89] found id: "0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1"
	I1119 22:58:11.494342 1066181 cri.go:89] found id: "adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	I1119 22:58:11.494345 1066181 cri.go:89] found id: "ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f"
	I1119 22:58:11.494348 1066181 cri.go:89] found id: ""
	I1119 22:58:11.494396 1066181 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:58:11.509700 1066181 out.go:203] 
	W1119 22:58:11.512578 1066181 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:58:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:58:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:58:11.512602 1066181 out.go:285] * 
	* 
	W1119 22:58:11.519942 1066181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:58:11.521915 1066181 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-018508 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-018508
helpers_test.go:243: (dbg) docker inspect no-preload-018508:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	        "Created": "2025-11-19T22:55:31.446403274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:57:04.320973334Z",
	            "FinishedAt": "2025-11-19T22:57:03.474312676Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hosts",
	        "LogPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90-json.log",
	        "Name": "/no-preload-018508",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-018508:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-018508",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	                "LowerDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-018508",
	                "Source": "/var/lib/docker/volumes/no-preload-018508/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-018508",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-018508",
	                "name.minikube.sigs.k8s.io": "no-preload-018508",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5be25482303bed2c517e30c7bd606d8a0595b3e52d34e10fc0a3fbda57f117ef",
	            "SandboxKey": "/var/run/docker/netns/5be25482303b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-018508": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:dc:e5:fa:b3:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecb686e72be7045ea0f2163632862012f1ddc546b19b453f0aeaec0f227ef432",
	                    "EndpointID": "a2739df0e97704758887c352e11572ff05a2daff75788b923f5efca6c930dd8e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-018508",
	                        "9259db4142ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
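For reference, the same State fields shown in the inspect dump above (Status, Paused) can be read directly with a Go template rather than dumping the full JSON. A minimal sketch of that pattern, assuming the docker CLI is on PATH and the profile container name used above; the helper name is illustrative only and is not the test harness's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectState returns the Status and Paused fields of a container,
// e.g. "running false", using a Go template with `docker container inspect`.
func inspectState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}} {{.State.Paused}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Profile container name taken from the inspect output above.
	state, err := inspectState("no-preload-018508")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(state) // prints e.g. "running false"
}

Against the output above this would print "running false", i.e. the kic container itself is running and not paused at the Docker level.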
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508: exit status 2 (408.435922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
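The harness treats the non-zero exit from `minikube status` as advisory ("may be ok") and still reads the one-word host state from stdout. A minimal sketch of that pattern, assuming the binary path and profile name shown above; the exit-code semantics belong to minikube, and the handling here is illustrative rather than the harness's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// returns the printed state (e.g. "Running") together with the exit code.
// A non-zero exit is reported to the caller instead of being treated as a
// hard failure, mirroring the "may be ok" note in the log above.
func hostStatus(minikube, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, runErr := cmd.Output()
	state = strings.TrimSpace(string(out))
	if exitErr, ok := runErr.(*exec.ExitError); ok {
		return state, exitErr.ExitCode(), nil // command ran; stdout is still usable
	}
	if runErr != nil {
		return "", -1, runErr // command could not be started at all
	}
	return state, 0, nil
}

func main() {
	state, code, err := hostStatus("out/minikube-linux-arm64", "no-preload-018508")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", state, code) // here: host="Running" exit=2
}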
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-018508 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-018508 logs -n 25: (1.5272505s)
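The post-mortem step bounds the dump to the last 25 lines per component (`logs -n 25`). A minimal sketch of capturing such a bounded dump to a file for later inspection, assuming the same binary and profile; the helper and file name are illustrative only:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// collectLogs writes the last n lines of each minikube component log for a
// profile to the given file, so a failed test can attach it post-mortem.
func collectLogs(minikube, profile string, n int, path string) error {
	out, err := exec.Command(minikube, "-p", profile, "logs", "-n", fmt.Sprint(n)).CombinedOutput()
	if werr := os.WriteFile(path, out, 0o644); werr != nil {
		return werr
	}
	return err // non-nil if the logs command itself failed
}

func main() {
	if err := collectLogs("out/minikube-linux-arm64", "no-preload-018508", 25,
		"postmortem-no-preload-018508.log"); err != nil {
		fmt.Println("logs command returned:", err)
	}
}

The captured dump is what follows below under "TestStartStop/group/no-preload/serial/Pause logs".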
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665        │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:01.257209 1065173 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:01.257406 1065173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:01.257439 1065173 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:01.257465 1065173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:01.257769 1065173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:58:01.258304 1065173 out.go:368] Setting JSON to false
	I1119 22:58:01.259358 1065173 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16810,"bootTime":1763576271,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:58:01.259472 1065173 start.go:143] virtualization:  
	I1119 22:58:01.262786 1065173 out.go:179] * [embed-certs-044665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:58:01.266289 1065173 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:01.266387 1065173 notify.go:221] Checking for updates...
	I1119 22:58:01.272143 1065173 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:01.275065 1065173 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:01.277815 1065173 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:58:01.280569 1065173 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:58:01.283512 1065173 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:01.287087 1065173 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:01.287228 1065173 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:01.333439 1065173 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:58:01.333602 1065173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:01.397878 1065173 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:58:01.388167878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:01.398000 1065173 docker.go:319] overlay module found
	I1119 22:58:01.401088 1065173 out.go:179] * Using the docker driver based on user configuration
	I1119 22:58:01.403994 1065173 start.go:309] selected driver: docker
	I1119 22:58:01.404032 1065173 start.go:930] validating driver "docker" against <nil>
	I1119 22:58:01.404049 1065173 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:01.404823 1065173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:01.468032 1065173 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:58:01.45773906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:01.468199 1065173 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:58:01.468440 1065173 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:01.471408 1065173 out.go:179] * Using Docker driver with root privileges
	I1119 22:58:01.474272 1065173 cni.go:84] Creating CNI manager for ""
	I1119 22:58:01.474347 1065173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:01.474366 1065173 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:58:01.474457 1065173 start.go:353] cluster config:
	{Name:embed-certs-044665 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-044665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:01.479460 1065173 out.go:179] * Starting "embed-certs-044665" primary control-plane node in "embed-certs-044665" cluster
	I1119 22:58:01.482291 1065173 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:58:01.485354 1065173 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:58:01.488366 1065173 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:01.488423 1065173 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:58:01.488434 1065173 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:01.488461 1065173 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:58:01.488533 1065173 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:58:01.488549 1065173 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:01.488739 1065173 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/embed-certs-044665/config.json ...
	I1119 22:58:01.488792 1065173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/embed-certs-044665/config.json: {Name:mk60ca82c28fe50e3e695074dc92f4acdba5e4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:01.510371 1065173 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:58:01.510398 1065173 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:58:01.510412 1065173 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:58:01.510437 1065173 start.go:360] acquireMachinesLock for embed-certs-044665: {Name:mk3e75b936e993efaa78e45607d8bed91c151b66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:01.510549 1065173 start.go:364] duration metric: took 91.955µs to acquireMachinesLock for "embed-certs-044665"
	I1119 22:58:01.510579 1065173 start.go:93] Provisioning new machine with config: &{Name:embed-certs-044665 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-044665 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:01.510653 1065173 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:58:01.514645 1065173 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:58:01.514919 1065173 start.go:159] libmachine.API.Create for "embed-certs-044665" (driver="docker")
	I1119 22:58:01.514972 1065173 client.go:173] LocalClient.Create starting
	I1119 22:58:01.515063 1065173 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 22:58:01.515104 1065173 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:01.515135 1065173 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:01.515195 1065173 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 22:58:01.515228 1065173 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:01.515240 1065173 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:01.515622 1065173 cli_runner.go:164] Run: docker network inspect embed-certs-044665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:58:01.531969 1065173 cli_runner.go:211] docker network inspect embed-certs-044665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:58:01.532060 1065173 network_create.go:284] running [docker network inspect embed-certs-044665] to gather additional debugging logs...
	I1119 22:58:01.532099 1065173 cli_runner.go:164] Run: docker network inspect embed-certs-044665
	W1119 22:58:01.549514 1065173 cli_runner.go:211] docker network inspect embed-certs-044665 returned with exit code 1
	I1119 22:58:01.549549 1065173 network_create.go:287] error running [docker network inspect embed-certs-044665]: docker network inspect embed-certs-044665: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-044665 not found
	I1119 22:58:01.549564 1065173 network_create.go:289] output of [docker network inspect embed-certs-044665]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-044665 not found
	
	** /stderr **
	I1119 22:58:01.549662 1065173 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:01.570684 1065173 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 22:58:01.571081 1065173 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 22:58:01.571478 1065173 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 22:58:01.571934 1065173 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dc950}
	I1119 22:58:01.571957 1065173 network_create.go:124] attempt to create docker network embed-certs-044665 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:58:01.572023 1065173 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-044665 embed-certs-044665
	I1119 22:58:01.637326 1065173 network_create.go:108] docker network embed-certs-044665 192.168.76.0/24 created
	I1119 22:58:01.637366 1065173 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-044665" container
	I1119 22:58:01.637478 1065173 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:58:01.665587 1065173 cli_runner.go:164] Run: docker volume create embed-certs-044665 --label name.minikube.sigs.k8s.io=embed-certs-044665 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:58:01.684517 1065173 oci.go:103] Successfully created a docker volume embed-certs-044665
	I1119 22:58:01.684617 1065173 cli_runner.go:164] Run: docker run --rm --name embed-certs-044665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-044665 --entrypoint /usr/bin/test -v embed-certs-044665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:58:02.279750 1065173 oci.go:107] Successfully prepared a docker volume embed-certs-044665
	I1119 22:58:02.279842 1065173 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:02.279861 1065173 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:58:02.279940 1065173 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-044665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:58:06.693100 1065173 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-044665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413121387s)
	I1119 22:58:06.693130 1065173 kic.go:203] duration metric: took 4.413266537s to extract preloaded images to volume ...
	W1119 22:58:06.693280 1065173 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:58:06.693384 1065173 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:58:06.753909 1065173 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-044665 --name embed-certs-044665 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-044665 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-044665 --network embed-certs-044665 --ip 192.168.76.2 --volume embed-certs-044665:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:58:07.071035 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Running}}
	I1119 22:58:07.090151 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.128661 1065173 cli_runner.go:164] Run: docker exec embed-certs-044665 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:58:07.180775 1065173 oci.go:144] the created container "embed-certs-044665" has a running status.
	I1119 22:58:07.180803 1065173 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa...
	I1119 22:58:07.632742 1065173 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:58:07.652046 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.669911 1065173 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:58:07.669930 1065173 kic_runner.go:114] Args: [docker exec --privileged embed-certs-044665 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:58:07.709513 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.728871 1065173 machine.go:94] provisionDockerMachine start ...
	I1119 22:58:07.728976 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:07.753799 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:07.754282 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:07.754296 1065173 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:58:07.754855 1065173 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42470->127.0.0.1:33861: read: connection reset by peer
	I1119 22:58:10.906581 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-044665
	
	I1119 22:58:10.906604 1065173 ubuntu.go:182] provisioning hostname "embed-certs-044665"
	I1119 22:58:10.906668 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:10.926149 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:10.926485 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:10.926497 1065173 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-044665 && echo "embed-certs-044665" | sudo tee /etc/hostname
	I1119 22:58:11.081396 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-044665
	
	I1119 22:58:11.081540 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:11.099188 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:11.099510 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:11.099534 1065173 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044665/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:58:11.255260 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:58:11.255300 1065173 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:58:11.255350 1065173 ubuntu.go:190] setting up certificates
	I1119 22:58:11.255360 1065173 provision.go:84] configureAuth start
	I1119 22:58:11.255443 1065173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-044665
	
	
	==> CRI-O <==
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.299799082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300112176Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d4457270dac4919e93446f82687efc72c276005818c0a52bee39a22c5cc7e037/merged/etc/passwd: no such file or directory"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300205387Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4457270dac4919e93446f82687efc72c276005818c0a52bee39a22c5cc7e037/merged/etc/group: no such file or directory"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300533217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.320322952Z" level=info msg="Created container 43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3: kube-system/storage-provisioner/storage-provisioner" id=14719fac-9f9d-4f3c-9a00-82e95d4438a1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.327620771Z" level=info msg="Starting container: 43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3" id=9054787f-6213-4e78-abc9-418c06f99c27 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.329467751Z" level=info msg="Started container" PID=1672 containerID=43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3 description=kube-system/storage-provisioner/storage-provisioner id=9054787f-6213-4e78-abc9-418c06f99c27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d0fa146b09e30889b96012d5023dd1df1fd989402be49619fcdc36de40c1002
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.292302235Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298754854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298805521Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298825206Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321198116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321248751Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321272464Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.339590747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.339625078Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.33964843Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.34544707Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.345484371Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.345506197Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358472891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358678522Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358804012Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.364505552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.36468436Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	43a3a3b51a954       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago       Running             storage-provisioner         2                   8d0fa146b09e3       storage-provisioner                          kube-system
	adfda55cb43d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   174d833a0f3cd       dashboard-metrics-scraper-6ffb444bf9-pzlsp   kubernetes-dashboard
	ec6ca133849d9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   bb3107e3c891e       kubernetes-dashboard-855c9754f9-hpp5l        kubernetes-dashboard
	ebd5a3d9d7d33       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   2c0bc1c18660b       busybox                                      default
	5f2f114727d3d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   57122ad585246       kindnet-2n4sq                                kube-system
	bbe29ffec8970       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   ebbec72a27c27       coredns-66bc5c9577-rxhmf                     kube-system
	d0968886aae29       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago       Exited              storage-provisioner         1                   8d0fa146b09e3       storage-provisioner                          kube-system
	f6cf0d00f87f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   48d3a809c2045       kube-proxy-pn4pw                             kube-system
	8241a31d9950d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   633e4178e557a       kube-apiserver-no-preload-018508             kube-system
	1f88f376101bc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7a790a14b7538       kube-scheduler-no-preload-018508             kube-system
	0ca77d79cf856       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7489e2a2422c2       etcd-no-preload-018508                       kube-system
	0b7a4c896c79b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9429de71377f9       kube-controller-manager-no-preload-018508    kube-system
	
	
	==> coredns [bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60262 - 35694 "HINFO IN 8982507046229991958.2596567006210547750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039453582s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-018508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-018508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-018508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_56_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:56:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-018508
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:58:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-018508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                09a9f2b2-499b-4381-b448-723471f1496f
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-rxhmf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-018508                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-2n4sq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-018508              250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-018508     200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-pn4pw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-018508              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pzlsp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hpp5l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 109s               kube-proxy       
	  Normal   Starting                 48s                kube-proxy       
	  Normal   NodeHasSufficientPID     115s               kubelet          Node no-preload-018508 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 115s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  115s               kubelet          Node no-preload-018508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s               kubelet          Node no-preload-018508 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 115s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           111s               node-controller  Node no-preload-018508 event: Registered Node no-preload-018508 in Controller
	  Normal   NodeReady                96s                kubelet          Node no-preload-018508 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node no-preload-018508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node no-preload-018508 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node no-preload-018508 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node no-preload-018508 event: Registered Node no-preload-018508 in Controller
	
	
	==> dmesg <==
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f] <==
	{"level":"warn","ts":"2025-11-19T22:57:17.400873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.440069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.494675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.543441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.610852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.634588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.670071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.708890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.775219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.889832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.957737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.032257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.140507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.160003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.182031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.210968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.269838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.285283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.335055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.353082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.373015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.422411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.461083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.496691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.664026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:58:13 up  4:40,  0 user,  load average: 3.59, 3.11, 2.49
	Linux no-preload-018508 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae] <==
	I1119 22:57:21.959917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:57:21.960152       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:57:21.960274       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:57:21.960291       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:57:21.960303       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:57:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:57:22.309057       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:57:22.309087       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:57:22.309098       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:57:22.309252       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:57:52.288880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:57:52.289081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:57:52.310834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 22:57:52.316598       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 22:57:53.710268       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:57:53.710388       1 metrics.go:72] Registering metrics
	I1119 22:57:53.710503       1 controller.go:711] "Syncing nftables rules"
	I1119 22:58:02.291281       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:58:02.292064       1 main.go:301] handling current node
	I1119 22:58:12.294930       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:58:12.294964       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d] <==
	I1119 22:57:20.717153       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:57:20.717197       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:57:20.735891       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:57:20.736011       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:57:20.737315       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 22:57:20.737453       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:57:20.737469       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:57:20.737478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:57:20.737484       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:57:20.738397       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:57:20.747508       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:57:20.752151       1 policy_source.go:240] refreshing policies
	I1119 22:57:20.761885       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1119 22:57:20.852017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:57:20.853749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:57:21.200002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:57:23.077777       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:57:23.352456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:57:23.527802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:57:23.585045       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:57:23.848300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.104.99"}
	I1119 22:57:23.889100       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.153.117"}
	I1119 22:57:24.756903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:57:25.092294       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:57:25.276196       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1] <==
	I1119 22:57:24.697574       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:57:24.704744       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:57:24.711112       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:57:24.711199       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:57:24.711249       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:57:24.713058       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:57:24.715367       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:57:24.718073       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:57:24.718234       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:57:24.718350       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-018508"
	I1119 22:57:24.718430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:57:24.719455       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:57:24.744232       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:57:24.744322       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:57:24.744747       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:57:24.745198       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:57:24.750311       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:57:24.750339       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:57:24.750351       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:57:24.750370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:57:24.750467       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:57:24.757535       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:57:24.764522       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:57:24.764523       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:57:24.768541       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9] <==
	I1119 22:57:23.137021       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:57:23.448669       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:57:23.559118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:57:23.567995       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:57:23.568074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:57:23.961528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:57:23.961639       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:57:24.025042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:57:24.025461       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:57:24.025695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:24.027390       1 config.go:200] "Starting service config controller"
	I1119 22:57:24.027461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:57:24.027522       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:57:24.027563       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:57:24.027601       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:57:24.027635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:57:24.034033       1 config.go:309] "Starting node config controller"
	I1119 22:57:24.034110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:57:24.034119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:57:24.138972       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:57:24.139076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:57:24.139102       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08] <==
	I1119 22:57:18.753683       1 serving.go:386] Generated self-signed cert in-memory
	I1119 22:57:20.877996       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:57:20.878035       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:20.932786       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 22:57:20.932838       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 22:57:20.932872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:20.932880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:20.932894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:57:20.932908       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:57:20.938815       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:57:20.938930       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:57:21.035637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 22:57:21.035762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:21.042895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433197     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/efd9f3bc-bdd5-4976-b338-561dd5577ab9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hpp5l\" (UID: \"efd9f3bc-bdd5-4976-b338-561dd5577ab9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433461     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97951552-b1fa-440f-a46b-b695cd61cf8e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pzlsp\" (UID: \"97951552-b1fa-440f-a46b-b695cd61cf8e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433581     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xh9\" (UniqueName: \"kubernetes.io/projected/97951552-b1fa-440f-a46b-b695cd61cf8e-kube-api-access-66xh9\") pod \"dashboard-metrics-scraper-6ffb444bf9-pzlsp\" (UID: \"97951552-b1fa-440f-a46b-b695cd61cf8e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433695     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbgg\" (UniqueName: \"kubernetes.io/projected/efd9f3bc-bdd5-4976-b338-561dd5577ab9-kube-api-access-5qbgg\") pod \"kubernetes-dashboard-855c9754f9-hpp5l\" (UID: \"efd9f3bc-bdd5-4976-b338-561dd5577ab9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: W1119 22:57:25.618073     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727 WatchSource:0}: Error finding container 174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727: Status 404 returned error can't find the container with id 174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: W1119 22:57:25.890822     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f WatchSource:0}: Error finding container bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f: Status 404 returned error can't find the container with id bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f
	Nov 19 22:57:31 no-preload-018508 kubelet[784]: I1119 22:57:31.217317     784 scope.go:117] "RemoveContainer" containerID="0f46e56f93cef2d2d7062b0cbd29033e628a2fab858c3a6a721ad161d3ed66f7"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: I1119 22:57:32.226370     784 scope.go:117] "RemoveContainer" containerID="0f46e56f93cef2d2d7062b0cbd29033e628a2fab858c3a6a721ad161d3ed66f7"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: I1119 22:57:32.226652     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: E1119 22:57:32.226793     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:33 no-preload-018508 kubelet[784]: I1119 22:57:33.231085     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:33 no-preload-018508 kubelet[784]: E1119 22:57:33.231262     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:35 no-preload-018508 kubelet[784]: I1119 22:57:35.562785     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:35 no-preload-018508 kubelet[784]: E1119 22:57:35.563593     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:49 no-preload-018508 kubelet[784]: I1119 22:57:49.869879     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.274115     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.274460     784 scope.go:117] "RemoveContainer" containerID="adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: E1119 22:57:50.274642     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.298295     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l" podStartSLOduration=15.309457684 podStartE2EDuration="25.298277169s" podCreationTimestamp="2025-11-19 22:57:25 +0000 UTC" firstStartedPulling="2025-11-19 22:57:25.898146875 +0000 UTC m=+14.300325716" lastFinishedPulling="2025-11-19 22:57:35.88696636 +0000 UTC m=+24.289145201" observedRunningTime="2025-11-19 22:57:36.269058584 +0000 UTC m=+24.671237433" watchObservedRunningTime="2025-11-19 22:57:50.298277169 +0000 UTC m=+38.700456010"
	Nov 19 22:57:52 no-preload-018508 kubelet[784]: I1119 22:57:52.281206     784 scope.go:117] "RemoveContainer" containerID="d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	Nov 19 22:57:55 no-preload-018508 kubelet[784]: I1119 22:57:55.563029     784 scope.go:117] "RemoveContainer" containerID="adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	Nov 19 22:57:55 no-preload-018508 kubelet[784]: E1119 22:57:55.563220     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:58:09 no-preload-018508 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:58:09 no-preload-018508 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:58:09 no-preload-018508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f] <==
	2025/11/19 22:57:35 Using namespace: kubernetes-dashboard
	2025/11/19 22:57:35 Using in-cluster config to connect to apiserver
	2025/11/19 22:57:35 Using secret token for csrf signing
	2025/11/19 22:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:57:35 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:57:35 Generating JWE encryption key
	2025/11/19 22:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:57:36 Initializing JWE encryption key from synchronized object
	2025/11/19 22:57:36 Creating in-cluster Sidecar client
	2025/11/19 22:57:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:36 Serving insecurely on HTTP port: 9090
	2025/11/19 22:58:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:35 Starting overwatch
	
	
	==> storage-provisioner [43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3] <==
	I1119 22:57:52.343950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:57:52.355458       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:57:52.355517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:57:52.357646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:57:55.813416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:00.077738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:03.675748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:06.729602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.752493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.759888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:58:09.760041       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:58:09.760227       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b!
	I1119 22:58:09.760858       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f174fb19-ca6a-48a2-8622-e239a84010c4", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b became leader
	W1119 22:58:09.767768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.771124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:58:09.860605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b!
	W1119 22:58:11.775437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:11.781294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3] <==
	I1119 22:57:21.857203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:57:51.859403       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018508 -n no-preload-018508: exit status 2 (461.870653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-018508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-018508
helpers_test.go:243: (dbg) docker inspect no-preload-018508:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	        "Created": "2025-11-19T22:55:31.446403274Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:57:04.320973334Z",
	            "FinishedAt": "2025-11-19T22:57:03.474312676Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/hosts",
	        "LogPath": "/var/lib/docker/containers/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90-json.log",
	        "Name": "/no-preload-018508",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-018508:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-018508",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90",
	                "LowerDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/089a009a888654720163ac388c88bb961779c83bd82810dbce0a8b3104a6030e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-018508",
	                "Source": "/var/lib/docker/volumes/no-preload-018508/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-018508",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-018508",
	                "name.minikube.sigs.k8s.io": "no-preload-018508",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5be25482303bed2c517e30c7bd606d8a0595b3e52d34e10fc0a3fbda57f117ef",
	            "SandboxKey": "/var/run/docker/netns/5be25482303b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-018508": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:dc:e5:fa:b3:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ecb686e72be7045ea0f2163632862012f1ddc546b19b453f0aeaec0f227ef432",
	                    "EndpointID": "a2739df0e97704758887c352e11572ff05a2daff75788b923f5efca6c930dd8e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-018508",
	                        "9259db4142ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508: exit status 2 (442.647124ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-018508 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-018508 logs -n 25: (1.686205935s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:51 UTC │ 19 Nov 25 22:52 UTC │
	│ delete  │ -p kubernetes-upgrade-154655                                                                                                                                                                                                                  │ kubernetes-upgrade-154655 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:54 UTC │
	│ start   │ -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ cert-options-110863 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ ssh     │ -p cert-options-110863 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214    │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961    │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665        │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508         │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:01.257209 1065173 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:01.257406 1065173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:01.257439 1065173 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:01.257465 1065173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:01.257769 1065173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:58:01.258304 1065173 out.go:368] Setting JSON to false
	I1119 22:58:01.259358 1065173 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16810,"bootTime":1763576271,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:58:01.259472 1065173 start.go:143] virtualization:  
	I1119 22:58:01.262786 1065173 out.go:179] * [embed-certs-044665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:58:01.266289 1065173 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:01.266387 1065173 notify.go:221] Checking for updates...
	I1119 22:58:01.272143 1065173 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:01.275065 1065173 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:01.277815 1065173 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:58:01.280569 1065173 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:58:01.283512 1065173 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:01.287087 1065173 config.go:182] Loaded profile config "no-preload-018508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:01.287228 1065173 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:01.333439 1065173 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:58:01.333602 1065173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:01.397878 1065173 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:58:01.388167878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:01.398000 1065173 docker.go:319] overlay module found
	I1119 22:58:01.401088 1065173 out.go:179] * Using the docker driver based on user configuration
	I1119 22:58:01.403994 1065173 start.go:309] selected driver: docker
	I1119 22:58:01.404032 1065173 start.go:930] validating driver "docker" against <nil>
	I1119 22:58:01.404049 1065173 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:01.404823 1065173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:01.468032 1065173 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:58:01.45773906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:01.468199 1065173 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:58:01.468440 1065173 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:01.471408 1065173 out.go:179] * Using Docker driver with root privileges
	I1119 22:58:01.474272 1065173 cni.go:84] Creating CNI manager for ""
	I1119 22:58:01.474347 1065173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:01.474366 1065173 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:58:01.474457 1065173 start.go:353] cluster config:
	{Name:embed-certs-044665 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-044665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:01.479460 1065173 out.go:179] * Starting "embed-certs-044665" primary control-plane node in "embed-certs-044665" cluster
	I1119 22:58:01.482291 1065173 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:58:01.485354 1065173 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:58:01.488366 1065173 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:01.488423 1065173 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:58:01.488434 1065173 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:01.488461 1065173 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:58:01.488533 1065173 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:58:01.488549 1065173 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:01.488739 1065173 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/embed-certs-044665/config.json ...
	I1119 22:58:01.488792 1065173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/embed-certs-044665/config.json: {Name:mk60ca82c28fe50e3e695074dc92f4acdba5e4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:01.510371 1065173 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:58:01.510398 1065173 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:58:01.510412 1065173 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:58:01.510437 1065173 start.go:360] acquireMachinesLock for embed-certs-044665: {Name:mk3e75b936e993efaa78e45607d8bed91c151b66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:01.510549 1065173 start.go:364] duration metric: took 91.955µs to acquireMachinesLock for "embed-certs-044665"
	I1119 22:58:01.510579 1065173 start.go:93] Provisioning new machine with config: &{Name:embed-certs-044665 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-044665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:01.510653 1065173 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:58:01.514645 1065173 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:58:01.514919 1065173 start.go:159] libmachine.API.Create for "embed-certs-044665" (driver="docker")
	I1119 22:58:01.514972 1065173 client.go:173] LocalClient.Create starting
	I1119 22:58:01.515063 1065173 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 22:58:01.515104 1065173 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:01.515135 1065173 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:01.515195 1065173 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 22:58:01.515228 1065173 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:01.515240 1065173 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:01.515622 1065173 cli_runner.go:164] Run: docker network inspect embed-certs-044665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:58:01.531969 1065173 cli_runner.go:211] docker network inspect embed-certs-044665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:58:01.532060 1065173 network_create.go:284] running [docker network inspect embed-certs-044665] to gather additional debugging logs...
	I1119 22:58:01.532099 1065173 cli_runner.go:164] Run: docker network inspect embed-certs-044665
	W1119 22:58:01.549514 1065173 cli_runner.go:211] docker network inspect embed-certs-044665 returned with exit code 1
	I1119 22:58:01.549549 1065173 network_create.go:287] error running [docker network inspect embed-certs-044665]: docker network inspect embed-certs-044665: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-044665 not found
	I1119 22:58:01.549564 1065173 network_create.go:289] output of [docker network inspect embed-certs-044665]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-044665 not found
	
	** /stderr **
	I1119 22:58:01.549662 1065173 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:01.570684 1065173 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 22:58:01.571081 1065173 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 22:58:01.571478 1065173 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 22:58:01.571934 1065173 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dc950}
	I1119 22:58:01.571957 1065173 network_create.go:124] attempt to create docker network embed-certs-044665 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:58:01.572023 1065173 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-044665 embed-certs-044665
	I1119 22:58:01.637326 1065173 network_create.go:108] docker network embed-certs-044665 192.168.76.0/24 created
	I1119 22:58:01.637366 1065173 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-044665" container
	I1119 22:58:01.637478 1065173 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:58:01.665587 1065173 cli_runner.go:164] Run: docker volume create embed-certs-044665 --label name.minikube.sigs.k8s.io=embed-certs-044665 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:58:01.684517 1065173 oci.go:103] Successfully created a docker volume embed-certs-044665
	I1119 22:58:01.684617 1065173 cli_runner.go:164] Run: docker run --rm --name embed-certs-044665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-044665 --entrypoint /usr/bin/test -v embed-certs-044665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:58:02.279750 1065173 oci.go:107] Successfully prepared a docker volume embed-certs-044665
	I1119 22:58:02.279842 1065173 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:02.279861 1065173 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:58:02.279940 1065173 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-044665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:58:06.693100 1065173 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-044665:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413121387s)
	I1119 22:58:06.693130 1065173 kic.go:203] duration metric: took 4.413266537s to extract preloaded images to volume ...
	W1119 22:58:06.693280 1065173 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:58:06.693384 1065173 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:58:06.753909 1065173 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-044665 --name embed-certs-044665 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-044665 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-044665 --network embed-certs-044665 --ip 192.168.76.2 --volume embed-certs-044665:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:58:07.071035 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Running}}
	I1119 22:58:07.090151 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.128661 1065173 cli_runner.go:164] Run: docker exec embed-certs-044665 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:58:07.180775 1065173 oci.go:144] the created container "embed-certs-044665" has a running status.
	I1119 22:58:07.180803 1065173 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa...
	I1119 22:58:07.632742 1065173 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:58:07.652046 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.669911 1065173 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:58:07.669930 1065173 kic_runner.go:114] Args: [docker exec --privileged embed-certs-044665 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:58:07.709513 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:07.728871 1065173 machine.go:94] provisionDockerMachine start ...
	I1119 22:58:07.728976 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:07.753799 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:07.754282 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:07.754296 1065173 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:58:07.754855 1065173 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42470->127.0.0.1:33861: read: connection reset by peer
	I1119 22:58:10.906581 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-044665
	
	I1119 22:58:10.906604 1065173 ubuntu.go:182] provisioning hostname "embed-certs-044665"
	I1119 22:58:10.906668 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:10.926149 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:10.926485 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:10.926497 1065173 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-044665 && echo "embed-certs-044665" | sudo tee /etc/hostname
	I1119 22:58:11.081396 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-044665
	
	I1119 22:58:11.081540 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:11.099188 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:11.099510 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:11.099534 1065173 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044665/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:58:11.255260 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:58:11.255300 1065173 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:58:11.255350 1065173 ubuntu.go:190] setting up certificates
	I1119 22:58:11.255360 1065173 provision.go:84] configureAuth start
	I1119 22:58:11.255443 1065173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-044665
	I1119 22:58:11.274555 1065173 provision.go:143] copyHostCerts
	I1119 22:58:11.274627 1065173 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:58:11.274640 1065173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:58:11.274716 1065173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:58:11.274804 1065173 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:58:11.274815 1065173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:58:11.274841 1065173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:58:11.274916 1065173 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:58:11.274927 1065173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:58:11.274952 1065173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:58:11.275007 1065173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044665 san=[127.0.0.1 192.168.76.2 embed-certs-044665 localhost minikube]
	I1119 22:58:11.816956 1065173 provision.go:177] copyRemoteCerts
	I1119 22:58:11.817077 1065173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:58:11.817143 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:11.843620 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:11.948487 1065173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:58:11.980264 1065173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:58:12.003549 1065173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:58:12.030158 1065173 provision.go:87] duration metric: took 774.771463ms to configureAuth
	I1119 22:58:12.030184 1065173 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:58:12.030374 1065173 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:12.030486 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:12.051985 1065173 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:12.052296 1065173 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1119 22:58:12.052311 1065173 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:58:12.500916 1065173 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:58:12.500941 1065173 machine.go:97] duration metric: took 4.772046932s to provisionDockerMachine
	I1119 22:58:12.500951 1065173 client.go:176] duration metric: took 10.985971399s to LocalClient.Create
	I1119 22:58:12.500964 1065173 start.go:167] duration metric: took 10.986047887s to libmachine.API.Create "embed-certs-044665"
	I1119 22:58:12.500971 1065173 start.go:293] postStartSetup for "embed-certs-044665" (driver="docker")
	I1119 22:58:12.500984 1065173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:58:12.501048 1065173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:58:12.501092 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:12.523285 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:12.633887 1065173 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:58:12.637725 1065173 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:58:12.637758 1065173 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:58:12.637776 1065173 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:58:12.637840 1065173 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:58:12.637931 1065173 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:58:12.638042 1065173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:58:12.647803 1065173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:58:12.673195 1065173 start.go:296] duration metric: took 172.208936ms for postStartSetup
	I1119 22:58:12.673577 1065173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-044665
	I1119 22:58:12.696177 1065173 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/embed-certs-044665/config.json ...
	I1119 22:58:12.696468 1065173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:58:12.696530 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:12.717535 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:12.823261 1065173 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:58:12.830393 1065173 start.go:128] duration metric: took 11.319724779s to createHost
	I1119 22:58:12.830466 1065173 start.go:83] releasing machines lock for "embed-certs-044665", held for 11.31990197s
	I1119 22:58:12.830567 1065173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-044665
	I1119 22:58:12.848734 1065173 ssh_runner.go:195] Run: cat /version.json
	I1119 22:58:12.848794 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:12.849053 1065173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:58:12.849117 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:12.891012 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:12.897238 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:13.117247 1065173 ssh_runner.go:195] Run: systemctl --version
	I1119 22:58:13.124624 1065173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:58:13.178203 1065173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:58:13.185826 1065173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:58:13.185897 1065173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:58:13.217324 1065173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:58:13.217348 1065173 start.go:496] detecting cgroup driver to use...
	I1119 22:58:13.217379 1065173 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:58:13.217471 1065173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:58:13.238224 1065173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:58:13.256281 1065173 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:58:13.256341 1065173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:58:13.275379 1065173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:58:13.295661 1065173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:58:13.443120 1065173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:58:13.638076 1065173 docker.go:234] disabling docker service ...
	I1119 22:58:13.638139 1065173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:58:13.673592 1065173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:58:13.691275 1065173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:58:13.843358 1065173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:58:14.004859 1065173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:58:14.022166 1065173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:58:14.038849 1065173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:58:14.039006 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.049310 1065173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:58:14.049385 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.059479 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.069055 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.079000 1065173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:58:14.088070 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.099211 1065173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.115353 1065173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:14.125653 1065173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:58:14.135110 1065173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:58:14.144921 1065173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:14.311068 1065173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:58:14.483951 1065173 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:58:14.484021 1065173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:58:14.488192 1065173 start.go:564] Will wait 60s for crictl version
	I1119 22:58:14.488260 1065173 ssh_runner.go:195] Run: which crictl
	I1119 22:58:14.492193 1065173 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:58:14.534283 1065173 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:58:14.534366 1065173 ssh_runner.go:195] Run: crio --version
	I1119 22:58:14.579054 1065173 ssh_runner.go:195] Run: crio --version
	I1119 22:58:14.618955 1065173 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	
	
	==> CRI-O <==
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.299799082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300112176Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d4457270dac4919e93446f82687efc72c276005818c0a52bee39a22c5cc7e037/merged/etc/passwd: no such file or directory"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300205387Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d4457270dac4919e93446f82687efc72c276005818c0a52bee39a22c5cc7e037/merged/etc/group: no such file or directory"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.300533217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.320322952Z" level=info msg="Created container 43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3: kube-system/storage-provisioner/storage-provisioner" id=14719fac-9f9d-4f3c-9a00-82e95d4438a1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.327620771Z" level=info msg="Starting container: 43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3" id=9054787f-6213-4e78-abc9-418c06f99c27 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:57:52 no-preload-018508 crio[664]: time="2025-11-19T22:57:52.329467751Z" level=info msg="Started container" PID=1672 containerID=43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3 description=kube-system/storage-provisioner/storage-provisioner id=9054787f-6213-4e78-abc9-418c06f99c27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d0fa146b09e30889b96012d5023dd1df1fd989402be49619fcdc36de40c1002
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.292302235Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298754854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298805521Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.298825206Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321198116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321248751Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.321272464Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.339590747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.339625078Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.33964843Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.34544707Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.345484371Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.345506197Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358472891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358678522Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.358804012Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.364505552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:58:02 no-preload-018508 crio[664]: time="2025-11-19T22:58:02.36468436Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	43a3a3b51a954       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   8d0fa146b09e3       storage-provisioner                          kube-system
	adfda55cb43d5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   174d833a0f3cd       dashboard-metrics-scraper-6ffb444bf9-pzlsp   kubernetes-dashboard
	ec6ca133849d9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   bb3107e3c891e       kubernetes-dashboard-855c9754f9-hpp5l        kubernetes-dashboard
	ebd5a3d9d7d33       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   2c0bc1c18660b       busybox                                      default
	5f2f114727d3d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   57122ad585246       kindnet-2n4sq                                kube-system
	bbe29ffec8970       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   ebbec72a27c27       coredns-66bc5c9577-rxhmf                     kube-system
	d0968886aae29       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   8d0fa146b09e3       storage-provisioner                          kube-system
	f6cf0d00f87f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   48d3a809c2045       kube-proxy-pn4pw                             kube-system
	8241a31d9950d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   633e4178e557a       kube-apiserver-no-preload-018508             kube-system
	1f88f376101bc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7a790a14b7538       kube-scheduler-no-preload-018508             kube-system
	0ca77d79cf856       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7489e2a2422c2       etcd-no-preload-018508                       kube-system
	0b7a4c896c79b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9429de71377f9       kube-controller-manager-no-preload-018508    kube-system
	
	
	==> coredns [bbe29ffec8970b6045f9d2a1c3b698f114f1d930f930f4bd1382f20f9cd47ab3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60262 - 35694 "HINFO IN 8982507046229991958.2596567006210547750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039453582s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-018508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-018508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-018508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_56_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:56:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-018508
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:58:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:57:51 +0000   Wed, 19 Nov 2025 22:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-018508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                09a9f2b2-499b-4381-b448-723471f1496f
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-rxhmf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-018508                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-2n4sq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-018508              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-018508     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-pn4pw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-018508              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pzlsp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hpp5l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 112s               kube-proxy       
	  Normal   Starting                 51s                kube-proxy       
	  Normal   NodeHasSufficientPID     118s               kubelet          Node no-preload-018508 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 118s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  118s               kubelet          Node no-preload-018508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s               kubelet          Node no-preload-018508 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 118s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           114s               node-controller  Node no-preload-018508 event: Registered Node no-preload-018508 in Controller
	  Normal   NodeReady                99s                kubelet          Node no-preload-018508 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node no-preload-018508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node no-preload-018508 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node no-preload-018508 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node no-preload-018508 event: Registered Node no-preload-018508 in Controller
	
	
	==> dmesg <==
	[Nov19 22:32] overlayfs: idmapped layers are currently not supported
	[Nov19 22:33] overlayfs: idmapped layers are currently not supported
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ca77d79cf856161bd0be76a8105a6df41331886a7f4141d07d1f01030d3b61f] <==
	{"level":"warn","ts":"2025-11-19T22:57:17.400873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.440069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.494675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.543441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.610852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.634588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.670071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.708890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.775219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.889832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:17.957737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.032257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.140507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.160003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.182031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.210968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.269838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.285283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.335055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.353082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.373015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.422411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.461083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.496691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:57:18.664026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:58:15 up  4:40,  0 user,  load average: 3.87, 3.18, 2.51
	Linux no-preload-018508 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f2f114727d3d059b9b8af978a47a4fe033228cef877486ac19ab0d8650c9bae] <==
	I1119 22:57:21.959917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:57:21.960152       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:57:21.960274       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:57:21.960291       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:57:21.960303       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:57:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:57:22.309057       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:57:22.309087       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:57:22.309098       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:57:22.309252       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:57:52.288880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:57:52.289081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:57:52.310834       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 22:57:52.316598       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 22:57:53.710268       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:57:53.710388       1 metrics.go:72] Registering metrics
	I1119 22:57:53.710503       1 controller.go:711] "Syncing nftables rules"
	I1119 22:58:02.291281       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:58:02.292064       1 main.go:301] handling current node
	I1119 22:58:12.294930       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:58:12.294964       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8241a31d9950dc14faebfcacda6989082e73c6c340ebb79a4bb3a085a1f55c7d] <==
	I1119 22:57:20.717153       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:57:20.717197       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:57:20.735891       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:57:20.736011       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:57:20.737315       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 22:57:20.737453       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:57:20.737469       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:57:20.737478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:57:20.737484       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:57:20.738397       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:57:20.747508       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:57:20.752151       1 policy_source.go:240] refreshing policies
	I1119 22:57:20.761885       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1119 22:57:20.852017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:57:20.853749       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:57:21.200002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:57:23.077777       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:57:23.352456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:57:23.527802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:57:23.585045       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:57:23.848300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.104.99"}
	I1119 22:57:23.889100       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.153.117"}
	I1119 22:57:24.756903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:57:25.092294       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:57:25.276196       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b7a4c896c79bb186fac3bb70f921c384772de224c83d9a472b103fbbdf54df1] <==
	I1119 22:57:24.697574       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:57:24.704744       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:57:24.711112       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:57:24.711199       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:57:24.711249       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:57:24.713058       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:57:24.715367       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:57:24.718073       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:57:24.718234       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:57:24.718350       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-018508"
	I1119 22:57:24.718430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:57:24.719455       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:57:24.744232       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:57:24.744322       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:57:24.744747       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:57:24.745198       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:57:24.750311       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:57:24.750339       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:57:24.750351       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:57:24.750370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:57:24.750467       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:57:24.757535       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:57:24.764522       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:57:24.764523       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:57:24.768541       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [f6cf0d00f87f79bf613dfd53ab0a83cea7be8070227f69030439ebba4e5d48f9] <==
	I1119 22:57:23.137021       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:57:23.448669       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:57:23.559118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:57:23.567995       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:57:23.568074       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:57:23.961528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:57:23.961639       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:57:24.025042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:57:24.025461       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:57:24.025695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:24.027390       1 config.go:200] "Starting service config controller"
	I1119 22:57:24.027461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:57:24.027522       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:57:24.027563       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:57:24.027601       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:57:24.027635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:57:24.034033       1 config.go:309] "Starting node config controller"
	I1119 22:57:24.034110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:57:24.034119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:57:24.138972       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:57:24.139076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:57:24.139102       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f88f376101bc7784fb6cf6c411ac17f97449b825be8d2877527a16b0886db08] <==
	I1119 22:57:18.753683       1 serving.go:386] Generated self-signed cert in-memory
	I1119 22:57:20.877996       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:57:20.878035       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:57:20.932786       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 22:57:20.932838       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 22:57:20.932872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:20.932880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:20.932894       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:57:20.932908       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:57:20.938815       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:57:20.938930       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:57:21.035637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 22:57:21.035762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:57:21.042895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433197     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/efd9f3bc-bdd5-4976-b338-561dd5577ab9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hpp5l\" (UID: \"efd9f3bc-bdd5-4976-b338-561dd5577ab9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433461     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97951552-b1fa-440f-a46b-b695cd61cf8e-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-pzlsp\" (UID: \"97951552-b1fa-440f-a46b-b695cd61cf8e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433581     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xh9\" (UniqueName: \"kubernetes.io/projected/97951552-b1fa-440f-a46b-b695cd61cf8e-kube-api-access-66xh9\") pod \"dashboard-metrics-scraper-6ffb444bf9-pzlsp\" (UID: \"97951552-b1fa-440f-a46b-b695cd61cf8e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: I1119 22:57:25.433695     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbgg\" (UniqueName: \"kubernetes.io/projected/efd9f3bc-bdd5-4976-b338-561dd5577ab9-kube-api-access-5qbgg\") pod \"kubernetes-dashboard-855c9754f9-hpp5l\" (UID: \"efd9f3bc-bdd5-4976-b338-561dd5577ab9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l"
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: W1119 22:57:25.618073     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727 WatchSource:0}: Error finding container 174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727: Status 404 returned error can't find the container with id 174d833a0f3cdd13fa31ab94d1968bb9979fd347d39a3298f7901e4ee33f5727
	Nov 19 22:57:25 no-preload-018508 kubelet[784]: W1119 22:57:25.890822     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9259db4142ad89a464c825e8344ed69a1b5284d46ed1c85b643257415fe60a90/crio-bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f WatchSource:0}: Error finding container bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f: Status 404 returned error can't find the container with id bb3107e3c891ec2bf1e5c9b929d14156118e32c2178516e865f415fb993aba9f
	Nov 19 22:57:31 no-preload-018508 kubelet[784]: I1119 22:57:31.217317     784 scope.go:117] "RemoveContainer" containerID="0f46e56f93cef2d2d7062b0cbd29033e628a2fab858c3a6a721ad161d3ed66f7"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: I1119 22:57:32.226370     784 scope.go:117] "RemoveContainer" containerID="0f46e56f93cef2d2d7062b0cbd29033e628a2fab858c3a6a721ad161d3ed66f7"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: I1119 22:57:32.226652     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:32 no-preload-018508 kubelet[784]: E1119 22:57:32.226793     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:33 no-preload-018508 kubelet[784]: I1119 22:57:33.231085     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:33 no-preload-018508 kubelet[784]: E1119 22:57:33.231262     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:35 no-preload-018508 kubelet[784]: I1119 22:57:35.562785     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:35 no-preload-018508 kubelet[784]: E1119 22:57:35.563593     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:49 no-preload-018508 kubelet[784]: I1119 22:57:49.869879     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.274115     784 scope.go:117] "RemoveContainer" containerID="cb0141e545eeff5fd7fa46d806b26f9700927c568225bf34ed2a467d41a9492c"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.274460     784 scope.go:117] "RemoveContainer" containerID="adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: E1119 22:57:50.274642     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:57:50 no-preload-018508 kubelet[784]: I1119 22:57:50.298295     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hpp5l" podStartSLOduration=15.309457684 podStartE2EDuration="25.298277169s" podCreationTimestamp="2025-11-19 22:57:25 +0000 UTC" firstStartedPulling="2025-11-19 22:57:25.898146875 +0000 UTC m=+14.300325716" lastFinishedPulling="2025-11-19 22:57:35.88696636 +0000 UTC m=+24.289145201" observedRunningTime="2025-11-19 22:57:36.269058584 +0000 UTC m=+24.671237433" watchObservedRunningTime="2025-11-19 22:57:50.298277169 +0000 UTC m=+38.700456010"
	Nov 19 22:57:52 no-preload-018508 kubelet[784]: I1119 22:57:52.281206     784 scope.go:117] "RemoveContainer" containerID="d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3"
	Nov 19 22:57:55 no-preload-018508 kubelet[784]: I1119 22:57:55.563029     784 scope.go:117] "RemoveContainer" containerID="adfda55cb43d5ae3850ee45b2785118194a8242175ae87c346e27167473e9753"
	Nov 19 22:57:55 no-preload-018508 kubelet[784]: E1119 22:57:55.563220     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pzlsp_kubernetes-dashboard(97951552-b1fa-440f-a46b-b695cd61cf8e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pzlsp" podUID="97951552-b1fa-440f-a46b-b695cd61cf8e"
	Nov 19 22:58:09 no-preload-018508 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:58:09 no-preload-018508 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:58:09 no-preload-018508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ec6ca133849d9d4a8e95fe911a11e4e72d5669bcf700d711fffdd234c6450a3f] <==
	2025/11/19 22:57:35 Using namespace: kubernetes-dashboard
	2025/11/19 22:57:35 Using in-cluster config to connect to apiserver
	2025/11/19 22:57:35 Using secret token for csrf signing
	2025/11/19 22:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:57:35 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:57:35 Generating JWE encryption key
	2025/11/19 22:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:57:36 Initializing JWE encryption key from synchronized object
	2025/11/19 22:57:36 Creating in-cluster Sidecar client
	2025/11/19 22:57:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:36 Serving insecurely on HTTP port: 9090
	2025/11/19 22:58:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:57:35 Starting overwatch
	
	
	==> storage-provisioner [43a3a3b51a9541a9cb2fc277b700fdc8cf16e061434c4620ca8d767f490ffea3] <==
	I1119 22:57:52.343950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:57:52.355458       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:57:52.355517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:57:52.357646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:57:55.813416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:00.077738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:03.675748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:06.729602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.752493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.759888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:58:09.760041       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:58:09.760227       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b!
	I1119 22:58:09.760858       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f174fb19-ca6a-48a2-8622-e239a84010c4", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b became leader
	W1119 22:58:09.767768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:09.771124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:58:09.860605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-018508_c6c26e0c-aa2a-461e-a979-d285e8c5a46b!
	W1119 22:58:11.775437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:11.781294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:13.784814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:13.789726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:15.794180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:58:15.807516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d0968886aae29c0b4c723dc154b09145ffa32993804364cc10d2e7a2a99cbfc3] <==
	I1119 22:57:21.857203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:57:51.859403       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018508 -n no-preload-018508
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018508 -n no-preload-018508: exit status 2 (450.565245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-018508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.91s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.410223ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:59:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-044665 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-044665 describe deploy/metrics-server -n kube-system: exit status 1 (81.951105ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-044665 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-044665
helpers_test.go:243: (dbg) docker inspect embed-certs-044665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	        "Created": "2025-11-19T22:58:06.768832725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1065618,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:58:06.830234731Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be-json.log",
	        "Name": "/embed-certs-044665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-044665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-044665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	                "LowerDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-044665",
	                "Source": "/var/lib/docker/volumes/embed-certs-044665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-044665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-044665",
	                "name.minikube.sigs.k8s.io": "embed-certs-044665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8a28790d596a3f4b06727eb4e2bd4743e650741298e47f77d1ed15f780b1ae9b",
	            "SandboxKey": "/var/run/docker/netns/8a28790d596a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-044665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:71:a5:ba:1a:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15bc9118c71b109d30f6317e5a328a97bacbdfe5f367a0001ea8dd4fc8a13fe9",
	                    "EndpointID": "6bd28e037ff6a52f811030a884c6d2757c1e0f28593537577cf5d637e3ca1555",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-044665",
	                        "c2d8d721c15d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
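The NetworkSettings.Ports block in the inspect output above records which loopback ports Docker published for the node's services (SSH on 22/tcp, the API server on 8443/tcp, and so on). The same lookup the provisioner performs later in these logs can be reproduced by hand with a Go-template filter; this is only a reference sketch using the container name from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-044665
	# for this run the command would print 33861, the host port mapped to the node's SSH endpoint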
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25: (1.382279474s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-110863                                                                                                                                                                                                                        │ cert-options-110863          │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-943214       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                                                                                               │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:20.444742 1068518 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:20.445260 1068518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:20.445295 1068518 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:20.445315 1068518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:20.445632 1068518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:58:20.446110 1068518 out.go:368] Setting JSON to false
	I1119 22:58:20.447106 1068518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16829,"bootTime":1763576271,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:58:20.447205 1068518 start.go:143] virtualization:  
	I1119 22:58:20.453397 1068518 out.go:179] * [default-k8s-diff-port-841969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:58:20.456812 1068518 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:20.456888 1068518 notify.go:221] Checking for updates...
	I1119 22:58:20.464083 1068518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:20.467307 1068518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:20.470419 1068518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:58:20.473542 1068518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:58:20.476555 1068518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:20.483543 1068518 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:20.483707 1068518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:20.531110 1068518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:58:20.531242 1068518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:20.642353 1068518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:58:20.63128231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:20.642467 1068518 docker.go:319] overlay module found
	I1119 22:58:20.645730 1068518 out.go:179] * Using the docker driver based on user configuration
	I1119 22:58:20.648537 1068518 start.go:309] selected driver: docker
	I1119 22:58:20.648568 1068518 start.go:930] validating driver "docker" against <nil>
	I1119 22:58:20.648584 1068518 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:20.649368 1068518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:20.746439 1068518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:58:20.73545528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:20.746636 1068518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:58:20.746990 1068518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:20.750142 1068518 out.go:179] * Using Docker driver with root privileges
	I1119 22:58:20.753012 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:20.753086 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:20.753102 1068518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:58:20.753192 1068518 start.go:353] cluster config:
	{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:20.756440 1068518 out.go:179] * Starting "default-k8s-diff-port-841969" primary control-plane node in "default-k8s-diff-port-841969" cluster
	I1119 22:58:20.759270 1068518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:58:20.762331 1068518 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:58:20.765320 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:20.765379 1068518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:58:20.765402 1068518 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:20.765526 1068518 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:58:20.765543 1068518 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:20.765667 1068518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 22:58:20.765693 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json: {Name:mk2baab3e43eca41d665b1ba11e60ade6847b5ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:20.765909 1068518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:58:20.792559 1068518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:58:20.792586 1068518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:58:20.792612 1068518 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:58:20.792647 1068518 start.go:360] acquireMachinesLock for default-k8s-diff-port-841969: {Name:mke5d323374b95cff07c96188997ebbdcf73748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:20.792767 1068518 start.go:364] duration metric: took 99.825µs to acquireMachinesLock for "default-k8s-diff-port-841969"
	I1119 22:58:20.792813 1068518 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:20.792897 1068518 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:58:18.489545 1065173 out.go:252]   - Generating certificates and keys ...
	I1119 22:58:18.489672 1065173 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:58:18.489769 1065173 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:58:18.883573 1065173 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:58:19.751146 1065173 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:58:20.740199 1065173 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:58:20.796409 1068518 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:58:20.796721 1068518 start.go:159] libmachine.API.Create for "default-k8s-diff-port-841969" (driver="docker")
	I1119 22:58:20.796784 1068518 client.go:173] LocalClient.Create starting
	I1119 22:58:20.796894 1068518 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 22:58:20.796953 1068518 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:20.796984 1068518 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:20.797077 1068518 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 22:58:20.797133 1068518 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:20.797153 1068518 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:20.797705 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:58:20.824696 1068518 cli_runner.go:211] docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:58:20.824812 1068518 network_create.go:284] running [docker network inspect default-k8s-diff-port-841969] to gather additional debugging logs...
	I1119 22:58:20.824836 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969
	W1119 22:58:20.842395 1068518 cli_runner.go:211] docker network inspect default-k8s-diff-port-841969 returned with exit code 1
	I1119 22:58:20.842448 1068518 network_create.go:287] error running [docker network inspect default-k8s-diff-port-841969]: docker network inspect default-k8s-diff-port-841969: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-841969 not found
	I1119 22:58:20.842472 1068518 network_create.go:289] output of [docker network inspect default-k8s-diff-port-841969]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-841969 not found
	
	** /stderr **
	I1119 22:58:20.842598 1068518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:20.862297 1068518 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 22:58:20.862754 1068518 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 22:58:20.863285 1068518 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 22:58:20.863632 1068518 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-15bc9118c71b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:ce:98:17:dc:31} reservation:<nil>}
	I1119 22:58:20.864233 1068518 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019779d0}
	I1119 22:58:20.864264 1068518 network_create.go:124] attempt to create docker network default-k8s-diff-port-841969 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:58:20.864342 1068518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 default-k8s-diff-port-841969
	I1119 22:58:20.934965 1068518 network_create.go:108] docker network default-k8s-diff-port-841969 192.168.85.0/24 created
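The network.go lines above show how minikube chooses an address range: it walks the private /24 subnets, skips any that an existing bridge network already occupies (192.168.49.0, .58.0, .67.0 and .76.0 here), and creates a bridge on the first free one, 192.168.85.0/24. A rough manual equivalent with the Docker CLI, shown only as a sketch and omitting the labels and -o driver options present in the full command above:

	# list the subnets already claimed by existing Docker networks
	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# then create a bridge on the first unused /24, as this run does for 192.168.85.0/24
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 default-k8s-diff-port-841969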
	I1119 22:58:20.934995 1068518 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-841969" container
	I1119 22:58:20.935070 1068518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:58:20.952658 1068518 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-841969 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:58:20.970753 1068518 oci.go:103] Successfully created a docker volume default-k8s-diff-port-841969
	I1119 22:58:20.970848 1068518 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-841969-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --entrypoint /usr/bin/test -v default-k8s-diff-port-841969:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:58:21.669512 1068518 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-841969
	I1119 22:58:21.669581 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:21.669591 1068518 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:58:21.669667 1068518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-841969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:58:21.382919 1065173 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:58:21.937531 1065173 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:58:21.938150 1065173 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-044665 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:58:22.527220 1065173 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:58:22.527363 1065173 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-044665 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:58:25.215965 1065173 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:58:25.350071 1065173 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:58:25.915178 1065173 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:58:25.915436 1065173 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:58:26.215793 1065173 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:58:26.998241 1065173 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:58:27.732391 1065173 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:58:28.762769 1065173 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:58:29.791923 1065173 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:58:29.792764 1065173 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:58:29.795390 1065173 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:58:26.574917 1068518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-841969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.905174289s)
	I1119 22:58:26.574945 1068518 kic.go:203] duration metric: took 4.905351258s to extract preloaded images to volume ...
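The docker run completed above populates the profile's named volume from the preload tarball: the volume is mounted at /extractDir in a throwaway kicbase container and the lz4 tarball is untarred into it, after which the same volume is mounted at /var inside the node container. A quick way to confirm the extraction landed would be to list the volume contents with the same image; this is a verification sketch, not part of the test run, and assumes the preload lays its contents out under lib/ inside the volume:

	docker run --rm --entrypoint /usr/bin/ls \
	  -v default-k8s-diff-port-841969:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 \
	  /var/lib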
	W1119 22:58:26.575104 1068518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:58:26.575211 1068518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:58:26.651610 1068518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-841969 --name default-k8s-diff-port-841969 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --network default-k8s-diff-port-841969 --ip 192.168.85.2 --volume default-k8s-diff-port-841969:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:58:27.036257 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Running}}
	I1119 22:58:27.060479 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.089210 1068518 cli_runner.go:164] Run: docker exec default-k8s-diff-port-841969 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:58:27.148315 1068518 oci.go:144] the created container "default-k8s-diff-port-841969" has a running status.
	I1119 22:58:27.148346 1068518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa...
	I1119 22:58:27.569726 1068518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:58:27.599280 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.641593 1068518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:58:27.641618 1068518 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-841969 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:58:27.725505 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.752346 1068518 machine.go:94] provisionDockerMachine start ...
	I1119 22:58:27.752450 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:27.791275 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:27.791618 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:27.791628 1068518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:58:27.792248 1068518 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:58:29.798900 1065173 out.go:252]   - Booting up control plane ...
	I1119 22:58:29.799022 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:58:29.799117 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:58:29.799200 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:58:29.816865 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:58:29.816980 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:58:29.825212 1065173 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:58:29.825618 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:58:29.825677 1065173 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:58:29.964391 1065173 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:58:29.964515 1065173 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:58:30.983571 1065173 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.012424207s
	I1119 22:58:30.983886 1065173 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:58:30.983977 1065173 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 22:58:30.984070 1065173 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:58:30.984153 1065173 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:58:30.950750 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 22:58:30.950781 1068518 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-841969"
	I1119 22:58:30.950888 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:30.975531 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:30.975935 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:30.975964 1068518 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-841969 && echo "default-k8s-diff-port-841969" | sudo tee /etc/hostname
	I1119 22:58:31.137351 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 22:58:31.137525 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:31.155184 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:31.155507 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:31.155531 1068518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-841969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-841969/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-841969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:58:31.303308 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:58:31.303340 1068518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:58:31.303372 1068518 ubuntu.go:190] setting up certificates
	I1119 22:58:31.303382 1068518 provision.go:84] configureAuth start
	I1119 22:58:31.303445 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:31.321370 1068518 provision.go:143] copyHostCerts
	I1119 22:58:31.321442 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:58:31.321455 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:58:31.321535 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:58:31.321640 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:58:31.321653 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:58:31.321683 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:58:31.321735 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:58:31.321746 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:58:31.321771 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:58:31.321860 1068518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-841969 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-841969 localhost minikube]
	I1119 22:58:31.866692 1068518 provision.go:177] copyRemoteCerts
	I1119 22:58:31.866773 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:58:31.866824 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:31.886895 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.003595 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:58:32.035965 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:58:32.065533 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:58:32.104140 1068518 provision.go:87] duration metric: took 800.733678ms to configureAuth
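configureAuth above generates a server certificate whose SAN list (the san=[...] argument at 22:58:31.321860) covers 127.0.0.1, the node IP 192.168.85.2, the profile name, localhost and minikube, then copies it to /etc/docker on the node. If needed, the SANs on the generated certificate can be checked on the host; this is only a verification sketch, not something the test run performs:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'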
	I1119 22:58:32.104208 1068518 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:58:32.104413 1068518 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:32.104559 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.128525 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:32.128834 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:32.128850 1068518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:58:32.519213 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:58:32.519282 1068518 machine.go:97] duration metric: took 4.766909664s to provisionDockerMachine
	I1119 22:58:32.519310 1068518 client.go:176] duration metric: took 11.722514024s to LocalClient.Create
	I1119 22:58:32.519343 1068518 start.go:167] duration metric: took 11.722623023s to libmachine.API.Create "default-k8s-diff-port-841969"
	I1119 22:58:32.519380 1068518 start.go:293] postStartSetup for "default-k8s-diff-port-841969" (driver="docker")
	I1119 22:58:32.519406 1068518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:58:32.519503 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:58:32.519587 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.552878 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.677329 1068518 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:58:32.681354 1068518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:58:32.681381 1068518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:58:32.681393 1068518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:58:32.681446 1068518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:58:32.681520 1068518 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:58:32.681620 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:58:32.694607 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:58:32.723807 1068518 start.go:296] duration metric: took 204.395932ms for postStartSetup
	I1119 22:58:32.724242 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:32.751974 1068518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 22:58:32.752250 1068518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:58:32.752293 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.790627 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.905805 1068518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:58:32.912709 1068518 start.go:128] duration metric: took 12.11979618s to createHost
	I1119 22:58:32.912731 1068518 start.go:83] releasing machines lock for "default-k8s-diff-port-841969", held for 12.119946023s
	I1119 22:58:32.912811 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:32.939224 1068518 ssh_runner.go:195] Run: cat /version.json
	I1119 22:58:32.939279 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.939548 1068518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:58:32.939607 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.980452 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.987101 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:33.206755 1068518 ssh_runner.go:195] Run: systemctl --version
	I1119 22:58:33.215722 1068518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:58:33.297569 1068518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:58:33.302047 1068518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:58:33.302151 1068518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:58:33.345919 1068518 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:58:33.345960 1068518 start.go:496] detecting cgroup driver to use...
	I1119 22:58:33.346018 1068518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:58:33.346092 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:58:33.371137 1068518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:58:33.393584 1068518 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:58:33.393672 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:58:33.421153 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:58:33.451558 1068518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:58:33.653858 1068518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:58:33.874546 1068518 docker.go:234] disabling docker service ...
	I1119 22:58:33.874665 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:58:33.919152 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:58:33.939327 1068518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:58:34.153149 1068518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:58:34.364650 1068518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:58:34.389599 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:58:34.415364 1068518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:58:34.415457 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.429323 1068518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:58:34.429404 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.443353 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.453496 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.473469 1068518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:58:34.487334 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.500108 1068518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.525760 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.541474 1068518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:58:34.553119 1068518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:58:34.564560 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:34.771517 1068518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:58:34.998581 1068518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:58:34.998698 1068518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:58:35.004589 1068518 start.go:564] Will wait 60s for crictl version
	I1119 22:58:35.004766 1068518 ssh_runner.go:195] Run: which crictl
	I1119 22:58:35.011944 1068518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:58:35.048798 1068518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:58:35.048957 1068518 ssh_runner.go:195] Run: crio --version
	I1119 22:58:35.100708 1068518 ssh_runner.go:195] Run: crio --version
	I1119 22:58:35.153672 1068518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:58:35.156374 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:35.181308 1068518 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:58:35.185769 1068518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:58:35.195431 1068518 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:58:35.195579 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:35.195634 1068518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:58:35.254550 1068518 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:58:35.254630 1068518 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:58:35.254719 1068518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:58:35.300748 1068518 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:58:35.300768 1068518 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:58:35.300775 1068518 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 22:58:35.300856 1068518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-841969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:58:35.300934 1068518 ssh_runner.go:195] Run: crio config
	I1119 22:58:35.415297 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:35.415361 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:35.415397 1068518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:58:35.415452 1068518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-841969 NodeName:default-k8s-diff-port-841969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:58:35.415607 1068518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-841969"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:58:35.415698 1068518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:58:35.423726 1068518 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:58:35.423836 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:58:35.431531 1068518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:58:35.444763 1068518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:58:35.459032 1068518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1119 22:58:35.472345 1068518 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:58:35.476634 1068518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:58:35.486173 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:35.664874 1068518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:58:35.695274 1068518 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969 for IP: 192.168.85.2
	I1119 22:58:35.695337 1068518 certs.go:195] generating shared ca certs ...
	I1119 22:58:35.695370 1068518 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.695543 1068518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:58:35.695620 1068518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:58:35.695657 1068518 certs.go:257] generating profile certs ...
	I1119 22:58:35.695765 1068518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key
	I1119 22:58:35.695801 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt with IP's: []
	I1119 22:58:35.890542 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt ...
	I1119 22:58:35.890616 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: {Name:mkffbab5d69d49b454a8bb9ea3dfaa425d14dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.890889 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key ...
	I1119 22:58:35.890930 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key: {Name:mk92fd4ce07959e3e9b1fc2ae6270f3aa98de476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.891108 1068518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d
	I1119 22:58:35.891150 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:58:36.210323 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d ...
	I1119 22:58:36.210409 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d: {Name:mk616dc652f9c93daf58d458da12806b2bd611c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.210649 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d ...
	I1119 22:58:36.210688 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d: {Name:mk3ea2ede1bbcbffaa4e47e9d48fccb357efb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.210837 1068518 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt
	I1119 22:58:36.210993 1068518 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key
	I1119 22:58:36.211111 1068518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key
	I1119 22:58:36.211152 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt with IP's: []
	I1119 22:58:36.788484 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt ...
	I1119 22:58:36.788560 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt: {Name:mk9a13cd4ae2f3f341f69c86664fbf0f64b8630a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.788789 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key ...
	I1119 22:58:36.788826 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key: {Name:mkd69c7d076ca3e8ee440f04afc872389e5ede7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.789092 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:58:36.789162 1068518 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:58:36.789204 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:58:36.789258 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:58:36.789320 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:58:36.789373 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:58:36.789459 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:58:36.790132 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:58:36.829704 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:58:36.862891 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:58:36.894733 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:58:36.921163 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:58:36.939430 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:58:36.958132 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:58:36.976169 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:58:37.010163 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:58:37.049003 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:58:37.077773 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:58:37.105853 1068518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:58:37.123665 1068518 ssh_runner.go:195] Run: openssl version
	I1119 22:58:37.132181 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:58:37.142122 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.146653 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.146717 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.194704 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:58:37.203314 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:58:37.211661 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.216382 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.216491 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.258518 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:58:37.272245 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:58:37.281155 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.291276 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.291385 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.338095 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 22:58:37.346562 1068518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:58:37.351225 1068518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:58:37.351326 1068518 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:37.351444 1068518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:58:37.351532 1068518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:58:37.395274 1068518 cri.go:89] found id: ""
	I1119 22:58:37.395397 1068518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:58:37.407418 1068518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:58:37.418268 1068518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:58:37.418427 1068518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:58:37.430242 1068518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:58:37.430310 1068518 kubeadm.go:158] found existing configuration files:
	
	I1119 22:58:37.430394 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:58:37.443171 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:58:37.443289 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:58:37.456752 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:58:37.468517 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:58:37.468584 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:58:37.480168 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:58:37.490460 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:58:37.490525 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:58:37.498295 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:58:37.506666 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:58:37.506730 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:58:37.514239 1068518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:58:37.637215 1068518 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:58:37.637278 1068518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:58:37.668071 1068518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:58:37.668150 1068518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:58:37.668194 1068518 kubeadm.go:319] OS: Linux
	I1119 22:58:37.668246 1068518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:58:37.668300 1068518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:58:37.668357 1068518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:58:37.668412 1068518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:58:37.668467 1068518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:58:37.668521 1068518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:58:37.668573 1068518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:58:37.668627 1068518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:58:37.668680 1068518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:58:37.782514 1068518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:58:37.782652 1068518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:58:37.782756 1068518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:58:37.794112 1068518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:58:37.800614 1068518 out.go:252]   - Generating certificates and keys ...
	I1119 22:58:37.800717 1068518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:58:37.800791 1068518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:58:38.070272 1068518 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:58:39.100955 1068518 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:58:39.419074 1068518 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:58:39.830382 1068518 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:58:40.118594 1068518 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:58:40.120338 1068518 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-841969 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:58:37.313915 1065173 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.329606547s
	I1119 22:58:38.717095 1065173 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.733286927s
	I1119 22:58:40.986162 1065173 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002308437s
	I1119 22:58:41.033446 1065173 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:58:41.055820 1065173 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:58:41.079783 1065173 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:58:41.079990 1065173 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-044665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:58:41.097985 1065173 kubeadm.go:319] [bootstrap-token] Using token: sii85k.kns7hdnytojwdni2
	I1119 22:58:41.100932 1065173 out.go:252]   - Configuring RBAC rules ...
	I1119 22:58:41.101060 1065173 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:58:41.110930 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:58:41.136983 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:58:41.147227 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:58:41.147366 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:58:41.153774 1065173 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:58:41.396742 1065173 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:58:41.879405 1065173 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:58:42.402984 1065173 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:58:42.404555 1065173 kubeadm.go:319] 
	I1119 22:58:42.404634 1065173 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:58:42.404647 1065173 kubeadm.go:319] 
	I1119 22:58:42.404729 1065173 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:58:42.404740 1065173 kubeadm.go:319] 
	I1119 22:58:42.404767 1065173 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:58:42.405200 1065173 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:58:42.405271 1065173 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:58:42.405281 1065173 kubeadm.go:319] 
	I1119 22:58:42.405338 1065173 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:58:42.405347 1065173 kubeadm.go:319] 
	I1119 22:58:42.405397 1065173 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:58:42.405405 1065173 kubeadm.go:319] 
	I1119 22:58:42.405459 1065173 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:58:42.405543 1065173 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:58:42.405620 1065173 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:58:42.405631 1065173 kubeadm.go:319] 
	I1119 22:58:42.405903 1065173 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:58:42.405995 1065173 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:58:42.406006 1065173 kubeadm.go:319] 
	I1119 22:58:42.406281 1065173 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sii85k.kns7hdnytojwdni2 \
	I1119 22:58:42.406422 1065173 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:58:42.406607 1065173 kubeadm.go:319] 	--control-plane 
	I1119 22:58:42.406620 1065173 kubeadm.go:319] 
	I1119 22:58:42.406898 1065173 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:58:42.406911 1065173 kubeadm.go:319] 
	I1119 22:58:42.407236 1065173 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sii85k.kns7hdnytojwdni2 \
	I1119 22:58:42.407526 1065173 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:58:42.414210 1065173 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:58:42.414481 1065173 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:58:42.414619 1065173 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:58:42.414645 1065173 cni.go:84] Creating CNI manager for ""
	I1119 22:58:42.414652 1065173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:42.419589 1065173 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:58:40.757513 1068518 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:58:40.757767 1068518 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-841969 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:58:41.811507 1068518 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:58:42.592538 1068518 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:58:43.134595 1068518 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:58:43.138801 1068518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:58:43.979326 1068518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:58:44.332916 1068518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:58:44.638545 1068518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:58:44.831966 1068518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:58:45.944374 1068518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:58:45.945118 1068518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:58:45.955212 1068518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:58:42.422462 1065173 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:58:42.428083 1065173 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:58:42.428102 1065173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:58:42.452739 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:58:42.941161 1065173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:58:42.941316 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:42.941400 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044665 minikube.k8s.io/updated_at=2025_11_19T22_58_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-044665 minikube.k8s.io/primary=true
	I1119 22:58:43.200253 1065173 ops.go:34] apiserver oom_adj: -16
	I1119 22:58:43.200350 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:43.700453 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:44.200456 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:44.700880 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:45.200517 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:45.700820 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.200898 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.700474 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.991033 1065173 kubeadm.go:1114] duration metric: took 4.049761856s to wait for elevateKubeSystemPrivileges
	I1119 22:58:46.991062 1065173 kubeadm.go:403] duration metric: took 28.783233471s to StartCluster
	I1119 22:58:46.991080 1065173 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:46.991147 1065173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:46.992226 1065173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:46.992438 1065173 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:46.992579 1065173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:58:46.992844 1065173 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:46.992877 1065173 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:58:46.992941 1065173 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-044665"
	I1119 22:58:46.992962 1065173 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-044665"
	I1119 22:58:46.992983 1065173 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 22:58:46.993472 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:46.995524 1065173 addons.go:70] Setting default-storageclass=true in profile "embed-certs-044665"
	I1119 22:58:46.995559 1065173 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044665"
	I1119 22:58:46.995936 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:46.996245 1065173 out.go:179] * Verifying Kubernetes components...
	I1119 22:58:47.000444 1065173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:47.034563 1065173 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:58:47.040842 1065173 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:58:47.040870 1065173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:58:47.040956 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:47.055081 1065173 addons.go:239] Setting addon default-storageclass=true in "embed-certs-044665"
	I1119 22:58:47.055135 1065173 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 22:58:47.055609 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:47.087434 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:47.091068 1065173 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:58:47.091095 1065173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:58:47.091177 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:47.123676 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:47.734027 1065173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:58:47.734275 1065173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:58:47.769665 1065173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:58:47.808284 1065173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:58:48.404601 1065173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044665" to be "Ready" ...
	I1119 22:58:48.405731 1065173 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:58:48.773222 1065173 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:58:45.958744 1068518 out.go:252]   - Booting up control plane ...
	I1119 22:58:45.958852 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:58:45.967395 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:58:45.969505 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:58:46.015700 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:58:46.015816 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:58:46.038171 1068518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:58:46.038488 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:58:46.038645 1068518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:58:46.206571 1068518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:58:46.206697 1068518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:58:48.210264 1068518 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001601672s
	I1119 22:58:48.211529 1068518 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:58:48.211627 1068518 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1119 22:58:48.211873 1068518 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:58:48.212075 1068518 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:58:48.776091 1065173 addons.go:515] duration metric: took 1.783187506s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:58:48.910305 1065173 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-044665" context rescaled to 1 replicas
	W1119 22:58:50.407919 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:58:52.197771 1068518 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.985559334s
	I1119 22:58:54.377249 1068518 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.164776642s
	I1119 22:58:54.713963 1068518 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502144601s
	I1119 22:58:54.742749 1068518 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:58:54.767848 1068518 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:58:54.786909 1068518 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:58:54.787137 1068518 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-841969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:58:54.803803 1068518 kubeadm.go:319] [bootstrap-token] Using token: a8tgfv.98xha8e3gtrfgpvq
	I1119 22:58:54.806774 1068518 out.go:252]   - Configuring RBAC rules ...
	I1119 22:58:54.806953 1068518 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:58:54.814112 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:58:54.833020 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:58:54.838734 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:58:54.845274 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:58:54.850363 1068518 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:58:55.120774 1068518 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:58:55.561258 1068518 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:58:56.120975 1068518 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:58:56.122195 1068518 kubeadm.go:319] 
	I1119 22:58:56.122269 1068518 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:58:56.122276 1068518 kubeadm.go:319] 
	I1119 22:58:56.122356 1068518 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:58:56.122360 1068518 kubeadm.go:319] 
	I1119 22:58:56.122387 1068518 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:58:56.122449 1068518 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:58:56.122501 1068518 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:58:56.122506 1068518 kubeadm.go:319] 
	I1119 22:58:56.122562 1068518 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:58:56.122567 1068518 kubeadm.go:319] 
	I1119 22:58:56.122616 1068518 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:58:56.122621 1068518 kubeadm.go:319] 
	I1119 22:58:56.122675 1068518 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:58:56.122753 1068518 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:58:56.122824 1068518 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:58:56.122829 1068518 kubeadm.go:319] 
	I1119 22:58:56.122943 1068518 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:58:56.123033 1068518 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:58:56.123038 1068518 kubeadm.go:319] 
	I1119 22:58:56.123126 1068518 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token a8tgfv.98xha8e3gtrfgpvq \
	I1119 22:58:56.123234 1068518 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:58:56.123255 1068518 kubeadm.go:319] 	--control-plane 
	I1119 22:58:56.123260 1068518 kubeadm.go:319] 
	I1119 22:58:56.123348 1068518 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:58:56.123353 1068518 kubeadm.go:319] 
	I1119 22:58:56.123438 1068518 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token a8tgfv.98xha8e3gtrfgpvq \
	I1119 22:58:56.123544 1068518 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:58:56.127702 1068518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:58:56.127937 1068518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:58:56.128052 1068518 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
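
For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation (assuming the default certificate path):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
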
	I1119 22:58:56.128071 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.128082 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:56.131227 1068518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1119 22:58:52.408031 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:58:54.907856 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:58:56.134145 1068518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:58:56.138535 1068518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:58:56.138595 1068518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:58:56.154457 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:58:56.500372 1068518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:58:56.500526 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:56.500621 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-841969 minikube.k8s.io/updated_at=2025_11_19T22_58_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-841969 minikube.k8s.io/primary=true
	I1119 22:58:56.779174 1068518 ops.go:34] apiserver oom_adj: -16
	I1119 22:58:56.779278 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:57.280053 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:57.779636 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:58.280063 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:58.779741 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:59.279700 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:59.780119 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:59:00.280057 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:59:00.433252 1068518 kubeadm.go:1114] duration metric: took 3.932780413s to wait for elevateKubeSystemPrivileges
	I1119 22:59:00.433282 1068518 kubeadm.go:403] duration metric: took 23.081960035s to StartCluster
	I1119 22:59:00.433299 1068518 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:00.433365 1068518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:59:00.435187 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:00.435708 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:59:00.435711 1068518 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:00.436017 1068518 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:00.436067 1068518 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:00.436130 1068518 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-841969"
	I1119 22:59:00.436148 1068518 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-841969"
	I1119 22:59:00.436172 1068518 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 22:59:00.436375 1068518 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-841969"
	I1119 22:59:00.436419 1068518 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-841969"
	I1119 22:59:00.436682 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.436797 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.439202 1068518 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:00.443120 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:00.479639 1068518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1119 22:58:56.908058 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:58:59.408214 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:59:00.482374 1068518 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:59:00.482395 1068518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:59:00.482465 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:59:00.485747 1068518 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-841969"
	I1119 22:59:00.485794 1068518 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 22:59:00.486238 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.531020 1068518 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:59:00.531043 1068518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:59:00.531123 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:59:00.551282 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:59:00.568367 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:59:00.888472 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:59:00.888587 1068518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:00.897763 1068518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:59:00.974170 1068518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:59:01.514177 1068518 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:59:01.514897 1068518 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 22:59:01.943346 1068518 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:59:01.946313 1068518 addons.go:515] duration metric: took 1.510225713s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:59:02.020332 1068518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-841969" context rescaled to 1 replicas
	W1119 22:59:03.518359 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:01.409237 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:03.908019 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:05.908326 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:05.518672 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:08.019092 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:08.408193 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:10.908157 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:10.518559 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:13.018807 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:15.019752 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:12.908322 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:14.908392 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:17.517820 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:19.518187 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:17.407779 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:19.408573 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:21.518332 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:24.017718 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:21.908146 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:24.408542 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:26.017985 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:28.518403 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:26.908153 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:59:28.413147 1065173 node_ready.go:49] node "embed-certs-044665" is "Ready"
	I1119 22:59:28.413181 1065173 node_ready.go:38] duration metric: took 40.008497398s for node "embed-certs-044665" to be "Ready" ...
	I1119 22:59:28.413195 1065173 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:59:28.413250 1065173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:59:28.430099 1065173 api_server.go:72] duration metric: took 41.43762575s to wait for apiserver process to appear ...
	I1119 22:59:28.430125 1065173 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:59:28.430144 1065173 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:59:28.440450 1065173 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:59:28.446200 1065173 api_server.go:141] control plane version: v1.34.1
	I1119 22:59:28.446229 1065173 api_server.go:131] duration metric: took 16.096292ms to wait for apiserver health ...
	I1119 22:59:28.446239 1065173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:59:28.451023 1065173 system_pods.go:59] 8 kube-system pods found
	I1119 22:59:28.451060 1065173 system_pods.go:61] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.451067 1065173 system_pods.go:61] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.451073 1065173 system_pods.go:61] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.451079 1065173 system_pods.go:61] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.451085 1065173 system_pods.go:61] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.451091 1065173 system_pods.go:61] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.451095 1065173 system_pods.go:61] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.451101 1065173 system_pods.go:61] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.451110 1065173 system_pods.go:74] duration metric: took 4.862479ms to wait for pod list to return data ...
	I1119 22:59:28.451118 1065173 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:59:28.454135 1065173 default_sa.go:45] found service account: "default"
	I1119 22:59:28.454157 1065173 default_sa.go:55] duration metric: took 3.03296ms for default service account to be created ...
	I1119 22:59:28.454171 1065173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:59:28.459699 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:28.459729 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.459735 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.459742 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.459746 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.459751 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.459754 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.459758 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.459764 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.459783 1065173 retry.go:31] will retry after 273.81481ms: missing components: kube-dns
	I1119 22:59:28.737999 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:28.738047 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.738056 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.738063 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.738067 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.738072 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.738076 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.738080 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.738092 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.738106 1065173 retry.go:31] will retry after 323.983424ms: missing components: kube-dns
	I1119 22:59:29.065724 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:29.065763 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:29.065770 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:29.065777 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:29.065781 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:29.065785 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:29.065790 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:29.065794 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:29.065800 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:29.065814 1065173 retry.go:31] will retry after 478.611601ms: missing components: kube-dns
	I1119 22:59:29.548836 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:29.548869 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running
	I1119 22:59:29.548877 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:29.548882 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:29.548886 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:29.548891 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:29.548895 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:29.548900 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:29.548905 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 22:59:29.548912 1065173 system_pods.go:126] duration metric: took 1.094736434s to wait for k8s-apps to be running ...
	I1119 22:59:29.548925 1065173 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:59:29.548983 1065173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:59:29.563406 1065173 system_svc.go:56] duration metric: took 14.470293ms WaitForService to wait for kubelet
	I1119 22:59:29.563476 1065173 kubeadm.go:587] duration metric: took 42.571006676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:59:29.563502 1065173 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:59:29.566546 1065173 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:59:29.566581 1065173 node_conditions.go:123] node cpu capacity is 2
	I1119 22:59:29.566611 1065173 node_conditions.go:105] duration metric: took 3.101399ms to run NodePressure ...
	I1119 22:59:29.566624 1065173 start.go:242] waiting for startup goroutines ...
	I1119 22:59:29.566662 1065173 start.go:247] waiting for cluster config update ...
	I1119 22:59:29.566682 1065173 start.go:256] writing updated cluster config ...
	I1119 22:59:29.567050 1065173 ssh_runner.go:195] Run: rm -f paused
	I1119 22:59:29.570694 1065173 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:29.574950 1065173 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.579581 1065173 pod_ready.go:94] pod "coredns-66bc5c9577-kcs7v" is "Ready"
	I1119 22:59:29.579612 1065173 pod_ready.go:86] duration metric: took 4.632389ms for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.581649 1065173 pod_ready.go:83] waiting for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.585839 1065173 pod_ready.go:94] pod "etcd-embed-certs-044665" is "Ready"
	I1119 22:59:29.585864 1065173 pod_ready.go:86] duration metric: took 4.190737ms for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.588310 1065173 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.592751 1065173 pod_ready.go:94] pod "kube-apiserver-embed-certs-044665" is "Ready"
	I1119 22:59:29.592779 1065173 pod_ready.go:86] duration metric: took 4.442653ms for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.595312 1065173 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.975823 1065173 pod_ready.go:94] pod "kube-controller-manager-embed-certs-044665" is "Ready"
	I1119 22:59:29.975855 1065173 pod_ready.go:86] duration metric: took 380.517091ms for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.175585 1065173 pod_ready.go:83] waiting for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.574950 1065173 pod_ready.go:94] pod "kube-proxy-w5t4l" is "Ready"
	I1119 22:59:30.574980 1065173 pod_ready.go:86] duration metric: took 399.326908ms for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.775683 1065173 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:31.176314 1065173 pod_ready.go:94] pod "kube-scheduler-embed-certs-044665" is "Ready"
	I1119 22:59:31.176345 1065173 pod_ready.go:86] duration metric: took 400.63526ms for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:31.176359 1065173 pod_ready.go:40] duration metric: took 1.605630292s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:31.232055 1065173 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:59:31.235708 1065173 out.go:179] * Done! kubectl is now configured to use "embed-certs-044665" cluster and "default" namespace by default
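
minikube names the kubeconfig context after the profile, so once this "Done!" message appears a manual smoke test of the embed-certs cluster would look like the following (hypothetical commands, not part of the test run):

	kubectl --context embed-certs-044665 get nodes
	kubectl --context embed-certs-044665 -n kube-system get pods
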
	W1119 22:59:30.518438 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:33.017665 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:35.017878 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:37.019468 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:39.517749 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
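
The node_ready.go retry loop above keeps polling the node object until its Ready condition flips to True. A rough manual equivalent, assuming the kubeconfig context matches the profile name, would be:

	kubectl --context default-k8s-diff-port-841969 wait node/default-k8s-diff-port-841969 \
	  --for=condition=Ready --timeout=6m
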
	
	
	==> CRI-O <==
	Nov 19 22:59:28 embed-certs-044665 crio[839]: time="2025-11-19T22:59:28.463657455Z" level=info msg="Created container 2d5ba3d022a5a670a2f163415ba4aefd0afee816cf5d675219377fba944cab0c: kube-system/coredns-66bc5c9577-kcs7v/coredns" id=4ba40c89-8cc1-496c-8031-4c93d3351a55 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:28 embed-certs-044665 crio[839]: time="2025-11-19T22:59:28.467416402Z" level=info msg="Starting container: 2d5ba3d022a5a670a2f163415ba4aefd0afee816cf5d675219377fba944cab0c" id=955df4cc-4603-4e3d-8e45-e6a4d0c9a1a2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:59:28 embed-certs-044665 crio[839]: time="2025-11-19T22:59:28.47713858Z" level=info msg="Started container" PID=1761 containerID=2d5ba3d022a5a670a2f163415ba4aefd0afee816cf5d675219377fba944cab0c description=kube-system/coredns-66bc5c9577-kcs7v/coredns id=955df4cc-4603-4e3d-8e45-e6a4d0c9a1a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b594ff1ab07f1114589de58218df92ab1c68fb6c41cdec76ceba04ec16b531a2
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.765896605Z" level=info msg="Running pod sandbox: default/busybox/POD" id=270c548c-3616-4c2a-ab10-d9186352a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.765977295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.775158004Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:94b9cea5223e9c73282b6c2adad542ba1171f6842227fa0af58512716dd500d7 UID:e2c91413-2762-471a-bbcc-cb2b7e0ac3fc NetNS:/var/run/netns/b28c136f-d77f-4a8b-81ba-f18380d9452a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d150}] Aliases:map[]}"
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.775195436Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.790710226Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:94b9cea5223e9c73282b6c2adad542ba1171f6842227fa0af58512716dd500d7 UID:e2c91413-2762-471a-bbcc-cb2b7e0ac3fc NetNS:/var/run/netns/b28c136f-d77f-4a8b-81ba-f18380d9452a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d150}] Aliases:map[]}"
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.791075849Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.794340868Z" level=info msg="Ran pod sandbox 94b9cea5223e9c73282b6c2adad542ba1171f6842227fa0af58512716dd500d7 with infra container: default/busybox/POD" id=270c548c-3616-4c2a-ab10-d9186352a6b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.795931584Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0523214a-aa3e-43c0-9663-8c0588ab763d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.796212727Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0523214a-aa3e-43c0-9663-8c0588ab763d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.796355563Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0523214a-aa3e-43c0-9663-8c0588ab763d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.797493615Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a591857-ed3b-4c93-a1a2-87d32348df48 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:59:31 embed-certs-044665 crio[839]: time="2025-11-19T22:59:31.800921786Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.015815072Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4a591857-ed3b-4c93-a1a2-87d32348df48 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.0165347Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c207abe-f768-4b1d-ab2a-147d39ca5854 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.019996816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=69ee4896-ccc2-4232-8c76-f9ea511045e1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.028355758Z" level=info msg="Creating container: default/busybox/busybox" id=4d82067d-668c-4ebb-b965-ee3c567aa88a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.028512043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.034034425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.034718107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.052711366Z" level=info msg="Created container f9eec6949095f7b4062d776066102acd0f795bcdc5e60e02cc64e030fef6fcd1: default/busybox/busybox" id=4d82067d-668c-4ebb-b965-ee3c567aa88a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.054026576Z" level=info msg="Starting container: f9eec6949095f7b4062d776066102acd0f795bcdc5e60e02cc64e030fef6fcd1" id=702036a6-dac2-4628-ade6-b6e8104d4409 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:59:34 embed-certs-044665 crio[839]: time="2025-11-19T22:59:34.056230483Z" level=info msg="Started container" PID=1824 containerID=f9eec6949095f7b4062d776066102acd0f795bcdc5e60e02cc64e030fef6fcd1 description=default/busybox/busybox id=702036a6-dac2-4628-ade6-b6e8104d4409 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94b9cea5223e9c73282b6c2adad542ba1171f6842227fa0af58512716dd500d7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f9eec6949095f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   94b9cea5223e9       busybox                                      default
	2d5ba3d022a5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   b594ff1ab07f1       coredns-66bc5c9577-kcs7v                     kube-system
	4800d4da27a9e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   d5cb5cd48d91c       storage-provisioner                          kube-system
	0e3d31a6c9e3e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   2b8c477c6111f       kindnet-bphl7                                kube-system
	43d559f286200       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   73669ccd2208d       kube-proxy-w5t4l                             kube-system
	c342ca4974b31       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   e8364599e2125       kube-controller-manager-embed-certs-044665   kube-system
	6c5c5f02c5715       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   01003aa77e3d4       etcd-embed-certs-044665                      kube-system
	58c9086ddc92d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   0bc360d15739b       kube-scheduler-embed-certs-044665            kube-system
	13a5620a7e211       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   91a0a8ea066ff       kube-apiserver-embed-certs-044665            kube-system
	
	
	==> coredns [2d5ba3d022a5a670a2f163415ba4aefd0afee816cf5d675219377fba944cab0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60556 - 37733 "HINFO IN 5628125507273837033.6338526848738177424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017090827s
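
Given the host.minikube.internal record injected into this Corefile earlier, a hypothetical in-cluster lookup (for example from the busybox pod created above, assuming its image ships nslookup) would be:

	kubectl --context embed-certs-044665 exec busybox -- nslookup host.minikube.internal
	# expected to resolve to 192.168.76.1 per the injected hosts block
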
	
	
	==> describe nodes <==
	Name:               embed-certs-044665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-044665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-044665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-044665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:59:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:59:28 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:59:28 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:59:28 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:59:28 +0000   Wed, 19 Nov 2025 22:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-044665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                f8def6c5-4626-4320-af5a-5122b8c6bdf4
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-kcs7v                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-044665                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-bphl7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-044665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-044665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-w5t4l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-044665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 73s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 73s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-044665 event: Registered Node embed-certs-044665 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-044665 status is now: NodeReady
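
The Ready condition shown in this describe output can also be read programmatically; a hypothetical jsonpath query against the same node would be:

	kubectl --context embed-certs-044665 get node embed-certs-044665 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
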
	
	
	==> dmesg <==
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c5c5f02c571553b930fafbf2c3689828949cca8cc638b0decadf0294da34673] <==
	{"level":"warn","ts":"2025-11-19T22:58:36.324392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.335089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.386989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.441849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.487299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.529675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.570463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.642084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.704290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.757991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.807444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.845128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.914091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:36.952890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.011545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.068906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.131693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.163956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.215093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.278749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.299659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.339109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.362607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.402744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:37.540039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37166","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:59:43 up  4:41,  0 user,  load average: 2.14, 2.94, 2.50
	Linux embed-certs-044665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e3d31a6c9e3e5bb9786ca48531b1da25e7daf20e90f6816682a42ecfe8b5f64] <==
	I1119 22:58:47.724280       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:58:47.724504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:58:47.724633       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:58:47.724863       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:58:47.724882       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:58:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:58:47.922585       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:58:47.922613       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:58:47.922622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:58:47.924962       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:59:17.922264       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:59:17.923489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:59:17.923504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:59:17.925978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:59:19.322800       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:59:19.322834       1 metrics.go:72] Registering metrics
	I1119 22:59:19.322952       1 controller.go:711] "Syncing nftables rules"
	I1119 22:59:27.922983       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:59:27.923045       1 main.go:301] handling current node
	I1119 22:59:37.924650       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:59:37.924683       1 main.go:301] handling current node
	
	
	==> kube-apiserver [13a5620a7e2113f0cbb006adc22ce2fb629cc44ec987d363d57e0ff44e0a5205] <==
	I1119 22:58:38.758105       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:58:38.758145       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1119 22:58:38.782696       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:58:38.797401       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:38.797439       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:58:38.798475       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:58:38.832069       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:38.835043       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:58:39.430413       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:58:39.440729       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:58:39.440757       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:58:40.436360       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:58:40.498937       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:58:40.640021       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:58:40.648910       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:58:40.650214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:58:40.655416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:58:40.689989       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:58:41.840401       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:58:41.877635       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:58:41.907030       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:58:46.394750       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:46.400796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:46.691457       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:58:46.884719       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c342ca4974b31ae6cba8fee27627e946249d04ad1a50301da79f08d821aa5f51] <==
	I1119 22:58:45.857503       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:58:45.857576       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-044665"
	I1119 22:58:45.857617       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:58:45.858426       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:58:45.858466       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:58:45.867253       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:58:45.867305       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:58:45.867323       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:58:45.867328       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:58:45.867333       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:58:45.867850       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:58:45.894930       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:58:45.895169       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:58:45.896332       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:58:45.896814       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:58:45.898703       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:58:45.911207       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:58:45.912667       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:58:45.932269       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:58:45.933381       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:58:45.940792       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:58:45.940832       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:58:45.959315       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:58:46.012592       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-044665" podCIDRs=["10.244.0.0/24"]
	I1119 22:59:30.864392       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [43d559f286200a43fb81bb1e0f8a9d396ded7d3f5ee776a0ea42be7c54fd2b89] <==
	I1119 22:58:47.667362       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:58:47.801050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:58:47.901384       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:58:47.901422       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:58:47.901487       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:58:48.027157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:58:48.027224       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:58:48.049313       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:58:48.049688       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:58:48.049705       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:58:48.051814       1 config.go:200] "Starting service config controller"
	I1119 22:58:48.051826       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:58:48.051843       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:58:48.051848       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:58:48.051876       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:58:48.051881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:58:48.052608       1 config.go:309] "Starting node config controller"
	I1119 22:58:48.052617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:58:48.052623       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:58:48.154966       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:58:48.155024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:58:48.155068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [58c9086ddc92da80106c4cb8a1e9f57921f451996773a792e40bb1f73219a722] <==
	E1119 22:58:38.723512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:58:38.723623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:58:38.723789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:58:38.723936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:58:38.723994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:58:38.724070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:58:38.724121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:58:38.724155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:58:38.724199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:58:38.725705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:58:38.725828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:58:39.555741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:58:39.635258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:58:39.638654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:58:39.652709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:58:39.767315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 22:58:39.812064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:58:39.860812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:58:39.872698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:58:39.872853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:58:39.961804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:58:40.007337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:58:40.095970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:58:40.099303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1119 22:58:42.888307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:58:43 embed-certs-044665 kubelet[1333]: I1119 22:58:43.386221    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-044665" podStartSLOduration=1.386200093 podStartE2EDuration="1.386200093s" podCreationTimestamp="2025-11-19 22:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:58:43.367283951 +0000 UTC m=+1.597655893" watchObservedRunningTime="2025-11-19 22:58:43.386200093 +0000 UTC m=+1.616572035"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.015218    1333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.019344    1333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.779266    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aaa92ce4-cadd-40ec-aa55-4a007a59e54b-kube-proxy\") pod \"kube-proxy-w5t4l\" (UID: \"aaa92ce4-cadd-40ec-aa55-4a007a59e54b\") " pod="kube-system/kube-proxy-w5t4l"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.779336    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaa92ce4-cadd-40ec-aa55-4a007a59e54b-lib-modules\") pod \"kube-proxy-w5t4l\" (UID: \"aaa92ce4-cadd-40ec-aa55-4a007a59e54b\") " pod="kube-system/kube-proxy-w5t4l"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.779409    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hk4\" (UniqueName: \"kubernetes.io/projected/aaa92ce4-cadd-40ec-aa55-4a007a59e54b-kube-api-access-k8hk4\") pod \"kube-proxy-w5t4l\" (UID: \"aaa92ce4-cadd-40ec-aa55-4a007a59e54b\") " pod="kube-system/kube-proxy-w5t4l"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.779463    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaa92ce4-cadd-40ec-aa55-4a007a59e54b-xtables-lock\") pod \"kube-proxy-w5t4l\" (UID: \"aaa92ce4-cadd-40ec-aa55-4a007a59e54b\") " pod="kube-system/kube-proxy-w5t4l"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.880015    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d19c80b2-4ab0-4850-8ffa-65b62e4121f6-cni-cfg\") pod \"kindnet-bphl7\" (UID: \"d19c80b2-4ab0-4850-8ffa-65b62e4121f6\") " pod="kube-system/kindnet-bphl7"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.880093    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19c80b2-4ab0-4850-8ffa-65b62e4121f6-xtables-lock\") pod \"kindnet-bphl7\" (UID: \"d19c80b2-4ab0-4850-8ffa-65b62e4121f6\") " pod="kube-system/kindnet-bphl7"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.880145    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19c80b2-4ab0-4850-8ffa-65b62e4121f6-lib-modules\") pod \"kindnet-bphl7\" (UID: \"d19c80b2-4ab0-4850-8ffa-65b62e4121f6\") " pod="kube-system/kindnet-bphl7"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.880168    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vptf\" (UniqueName: \"kubernetes.io/projected/d19c80b2-4ab0-4850-8ffa-65b62e4121f6-kube-api-access-4vptf\") pod \"kindnet-bphl7\" (UID: \"d19c80b2-4ab0-4850-8ffa-65b62e4121f6\") " pod="kube-system/kindnet-bphl7"
	Nov 19 22:58:46 embed-certs-044665 kubelet[1333]: I1119 22:58:46.968244    1333 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:58:47 embed-certs-044665 kubelet[1333]: W1119 22:58:47.417434    1333 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-2b8c477c6111f43ae6c71661f8ac1c6c4cefa4a19405518c30d4338a08f9c17d WatchSource:0}: Error finding container 2b8c477c6111f43ae6c71661f8ac1c6c4cefa4a19405518c30d4338a08f9c17d: Status 404 returned error can't find the container with id 2b8c477c6111f43ae6c71661f8ac1c6c4cefa4a19405518c30d4338a08f9c17d
	Nov 19 22:58:48 embed-certs-044665 kubelet[1333]: I1119 22:58:48.363816    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w5t4l" podStartSLOduration=2.363796328 podStartE2EDuration="2.363796328s" podCreationTimestamp="2025-11-19 22:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:58:48.342836543 +0000 UTC m=+6.573208477" watchObservedRunningTime="2025-11-19 22:58:48.363796328 +0000 UTC m=+6.594168262"
	Nov 19 22:58:48 embed-certs-044665 kubelet[1333]: I1119 22:58:48.707362    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bphl7" podStartSLOduration=2.70733839 podStartE2EDuration="2.70733839s" podCreationTimestamp="2025-11-19 22:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:58:48.365216138 +0000 UTC m=+6.595588080" watchObservedRunningTime="2025-11-19 22:58:48.70733839 +0000 UTC m=+6.937710323"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: I1119 22:59:28.007421    1333 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: I1119 22:59:28.108021    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd801ea5-7011-49a7-be54-65189f230b9e-config-volume\") pod \"coredns-66bc5c9577-kcs7v\" (UID: \"fd801ea5-7011-49a7-be54-65189f230b9e\") " pod="kube-system/coredns-66bc5c9577-kcs7v"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: I1119 22:59:28.108129    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcvs\" (UniqueName: \"kubernetes.io/projected/fd801ea5-7011-49a7-be54-65189f230b9e-kube-api-access-wbcvs\") pod \"coredns-66bc5c9577-kcs7v\" (UID: \"fd801ea5-7011-49a7-be54-65189f230b9e\") " pod="kube-system/coredns-66bc5c9577-kcs7v"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: I1119 22:59:28.208647    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghxq\" (UniqueName: \"kubernetes.io/projected/0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3-kube-api-access-6ghxq\") pod \"storage-provisioner\" (UID: \"0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3\") " pod="kube-system/storage-provisioner"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: I1119 22:59:28.209028    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3-tmp\") pod \"storage-provisioner\" (UID: \"0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3\") " pod="kube-system/storage-provisioner"
	Nov 19 22:59:28 embed-certs-044665 kubelet[1333]: W1119 22:59:28.376683    1333 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-d5cb5cd48d91c5df77db74d3254e02eff32c8ad035a606213d87bd6734fadd3f WatchSource:0}: Error finding container d5cb5cd48d91c5df77db74d3254e02eff32c8ad035a606213d87bd6734fadd3f: Status 404 returned error can't find the container with id d5cb5cd48d91c5df77db74d3254e02eff32c8ad035a606213d87bd6734fadd3f
	Nov 19 22:59:29 embed-certs-044665 kubelet[1333]: I1119 22:59:29.462466    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.462446814 podStartE2EDuration="41.462446814s" podCreationTimestamp="2025-11-19 22:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:29.448795448 +0000 UTC m=+47.679167390" watchObservedRunningTime="2025-11-19 22:59:29.462446814 +0000 UTC m=+47.692818747"
	Nov 19 22:59:31 embed-certs-044665 kubelet[1333]: I1119 22:59:31.456028    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kcs7v" podStartSLOduration=44.455987038 podStartE2EDuration="44.455987038s" podCreationTimestamp="2025-11-19 22:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:29.463793187 +0000 UTC m=+47.694165129" watchObservedRunningTime="2025-11-19 22:59:31.455987038 +0000 UTC m=+49.686358980"
	Nov 19 22:59:31 embed-certs-044665 kubelet[1333]: I1119 22:59:31.530191    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkj7\" (UniqueName: \"kubernetes.io/projected/e2c91413-2762-471a-bbcc-cb2b7e0ac3fc-kube-api-access-8qkj7\") pod \"busybox\" (UID: \"e2c91413-2762-471a-bbcc-cb2b7e0ac3fc\") " pod="default/busybox"
	Nov 19 22:59:34 embed-certs-044665 kubelet[1333]: I1119 22:59:34.463352    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.242608293 podStartE2EDuration="3.463333165s" podCreationTimestamp="2025-11-19 22:59:31 +0000 UTC" firstStartedPulling="2025-11-19 22:59:31.796704563 +0000 UTC m=+50.027076505" lastFinishedPulling="2025-11-19 22:59:34.017429435 +0000 UTC m=+52.247801377" observedRunningTime="2025-11-19 22:59:34.463129463 +0000 UTC m=+52.693501397" watchObservedRunningTime="2025-11-19 22:59:34.463333165 +0000 UTC m=+52.693705107"
	
	
	==> storage-provisioner [4800d4da27a9ecbebcab22fe790be8cf3ce7ec2f631480295cce74588ea828b0] <==
	I1119 22:59:28.506103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:59:28.538947       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:59:28.538998       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:59:28.541363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:28.550350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:59:28.550575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:59:28.550763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_9473202c-f42c-44da-9b72-52d77cb29733!
	I1119 22:59:28.552304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5b69f9e-50f8-4cbb-93e0-ac4960fffe1d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-044665_9473202c-f42c-44da-9b72-52d77cb29733 became leader
	W1119 22:59:28.559613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:28.564933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:59:28.652599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_9473202c-f42c-44da-9b72-52d77cb29733!
	W1119 22:59:30.568349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:30.573338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:32.576892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:32.581635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:34.584837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:34.589615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:36.592796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:36.600081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:38.603462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:38.607999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:40.611944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:40.616505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:42.619540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:42.625645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-044665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.80s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (248.345089ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:59:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-841969 describe deploy/metrics-server -n kube-system: exit status 1 (92.986988ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-841969 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-841969
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-841969:

-- stdout --
	[
	    {
	        "Id": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	        "Created": "2025-11-19T22:58:26.666905644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1068958,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:58:26.732241782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hostname",
	        "HostsPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hosts",
	        "LogPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90-json.log",
	        "Name": "/default-k8s-diff-port-841969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-841969:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-841969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	                "LowerDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-841969",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-841969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-841969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "416348baa58ccbab54f1616ff1d938fc572cd64b1c9559c45413fe5817b81226",
	            "SandboxKey": "/var/run/docker/netns/416348baa58c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-841969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:ed:cc:73:49:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2c6f4446420675e07c5c2c03d34bfff2c1cc2a3ba7ca61bbc8161387b161d43",
	                    "EndpointID": "fff78802d635081a2f671f4f678881973206b010bfd14bf735f2a0984bd9643c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-841969",
	                        "20b80382d56c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25: (1.199231235s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ delete  │ -p cert-expiration-943214                                                                                                                                                                                                                     │ cert-expiration-943214       │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:55 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:55 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-191961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p old-k8s-version-191961 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:56 UTC │
	│ start   │ -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                                                                                               │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                                                                                               │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:20.444742 1068518 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:20.445260 1068518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:20.445295 1068518 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:20.445315 1068518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:20.445632 1068518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:58:20.446110 1068518 out.go:368] Setting JSON to false
	I1119 22:58:20.447106 1068518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16829,"bootTime":1763576271,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:58:20.447205 1068518 start.go:143] virtualization:  
	I1119 22:58:20.453397 1068518 out.go:179] * [default-k8s-diff-port-841969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:58:20.456812 1068518 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:20.456888 1068518 notify.go:221] Checking for updates...
	I1119 22:58:20.464083 1068518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:20.467307 1068518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:20.470419 1068518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:58:20.473542 1068518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:58:20.476555 1068518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:20.483543 1068518 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:20.483707 1068518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:20.531110 1068518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:58:20.531242 1068518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:20.642353 1068518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:58:20.63128231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:20.642467 1068518 docker.go:319] overlay module found
	I1119 22:58:20.645730 1068518 out.go:179] * Using the docker driver based on user configuration
	I1119 22:58:20.648537 1068518 start.go:309] selected driver: docker
	I1119 22:58:20.648568 1068518 start.go:930] validating driver "docker" against <nil>
	I1119 22:58:20.648584 1068518 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:20.649368 1068518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:58:20.746439 1068518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:58:20.73545528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:58:20.746636 1068518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:58:20.746990 1068518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:20.750142 1068518 out.go:179] * Using Docker driver with root privileges
	I1119 22:58:20.753012 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:20.753086 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:20.753102 1068518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:58:20.753192 1068518 start.go:353] cluster config:
	{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:20.756440 1068518 out.go:179] * Starting "default-k8s-diff-port-841969" primary control-plane node in "default-k8s-diff-port-841969" cluster
	I1119 22:58:20.759270 1068518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:58:20.762331 1068518 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:58:20.765320 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:20.765379 1068518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 22:58:20.765402 1068518 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:20.765526 1068518 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 22:58:20.765543 1068518 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:20.765667 1068518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 22:58:20.765693 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json: {Name:mk2baab3e43eca41d665b1ba11e60ade6847b5ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:20.765909 1068518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:58:20.792559 1068518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:58:20.792586 1068518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:58:20.792612 1068518 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:58:20.792647 1068518 start.go:360] acquireMachinesLock for default-k8s-diff-port-841969: {Name:mke5d323374b95cff07c96188997ebbdcf73748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:20.792767 1068518 start.go:364] duration metric: took 99.825µs to acquireMachinesLock for "default-k8s-diff-port-841969"
	I1119 22:58:20.792813 1068518 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:20.792897 1068518 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:58:18.489545 1065173 out.go:252]   - Generating certificates and keys ...
	I1119 22:58:18.489672 1065173 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:58:18.489769 1065173 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:58:18.883573 1065173 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:58:19.751146 1065173 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:58:20.740199 1065173 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:58:20.796409 1068518 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:58:20.796721 1068518 start.go:159] libmachine.API.Create for "default-k8s-diff-port-841969" (driver="docker")
	I1119 22:58:20.796784 1068518 client.go:173] LocalClient.Create starting
	I1119 22:58:20.796894 1068518 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 22:58:20.796953 1068518 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:20.796984 1068518 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:20.797077 1068518 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 22:58:20.797133 1068518 main.go:143] libmachine: Decoding PEM data...
	I1119 22:58:20.797153 1068518 main.go:143] libmachine: Parsing certificate...
	I1119 22:58:20.797705 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:58:20.824696 1068518 cli_runner.go:211] docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:58:20.824812 1068518 network_create.go:284] running [docker network inspect default-k8s-diff-port-841969] to gather additional debugging logs...
	I1119 22:58:20.824836 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969
	W1119 22:58:20.842395 1068518 cli_runner.go:211] docker network inspect default-k8s-diff-port-841969 returned with exit code 1
	I1119 22:58:20.842448 1068518 network_create.go:287] error running [docker network inspect default-k8s-diff-port-841969]: docker network inspect default-k8s-diff-port-841969: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-841969 not found
	I1119 22:58:20.842472 1068518 network_create.go:289] output of [docker network inspect default-k8s-diff-port-841969]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-841969 not found
	
	** /stderr **
	I1119 22:58:20.842598 1068518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:20.862297 1068518 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 22:58:20.862754 1068518 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 22:58:20.863285 1068518 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 22:58:20.863632 1068518 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-15bc9118c71b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2e:ce:98:17:dc:31} reservation:<nil>}
	I1119 22:58:20.864233 1068518 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019779d0}
	I1119 22:58:20.864264 1068518 network_create.go:124] attempt to create docker network default-k8s-diff-port-841969 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:58:20.864342 1068518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 default-k8s-diff-port-841969
	I1119 22:58:20.934965 1068518 network_create.go:108] docker network default-k8s-diff-port-841969 192.168.85.0/24 created
	I1119 22:58:20.934995 1068518 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-841969" container
	I1119 22:58:20.935070 1068518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:58:20.952658 1068518 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-841969 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:58:20.970753 1068518 oci.go:103] Successfully created a docker volume default-k8s-diff-port-841969
	I1119 22:58:20.970848 1068518 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-841969-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --entrypoint /usr/bin/test -v default-k8s-diff-port-841969:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:58:21.669512 1068518 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-841969
	I1119 22:58:21.669581 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:21.669591 1068518 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:58:21.669667 1068518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-841969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:58:21.382919 1065173 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:58:21.937531 1065173 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:58:21.938150 1065173 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-044665 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:58:22.527220 1065173 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:58:22.527363 1065173 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-044665 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:58:25.215965 1065173 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:58:25.350071 1065173 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:58:25.915178 1065173 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:58:25.915436 1065173 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:58:26.215793 1065173 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:58:26.998241 1065173 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:58:27.732391 1065173 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:58:28.762769 1065173 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:58:29.791923 1065173 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:58:29.792764 1065173 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:58:29.795390 1065173 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:58:26.574917 1068518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-841969:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.905174289s)
	I1119 22:58:26.574945 1068518 kic.go:203] duration metric: took 4.905351258s to extract preloaded images to volume ...
	W1119 22:58:26.575104 1068518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 22:58:26.575211 1068518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:58:26.651610 1068518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-841969 --name default-k8s-diff-port-841969 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-841969 --network default-k8s-diff-port-841969 --ip 192.168.85.2 --volume default-k8s-diff-port-841969:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:58:27.036257 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Running}}
	I1119 22:58:27.060479 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.089210 1068518 cli_runner.go:164] Run: docker exec default-k8s-diff-port-841969 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:58:27.148315 1068518 oci.go:144] the created container "default-k8s-diff-port-841969" has a running status.
	I1119 22:58:27.148346 1068518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa...
	I1119 22:58:27.569726 1068518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:58:27.599280 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.641593 1068518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:58:27.641618 1068518 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-841969 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:58:27.725505 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:58:27.752346 1068518 machine.go:94] provisionDockerMachine start ...
	I1119 22:58:27.752450 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:27.791275 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:27.791618 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:27.791628 1068518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:58:27.792248 1068518 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 22:58:29.798900 1065173 out.go:252]   - Booting up control plane ...
	I1119 22:58:29.799022 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:58:29.799117 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:58:29.799200 1065173 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:58:29.816865 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:58:29.816980 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:58:29.825212 1065173 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:58:29.825618 1065173 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:58:29.825677 1065173 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:58:29.964391 1065173 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:58:29.964515 1065173 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:58:30.983571 1065173 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.012424207s
	I1119 22:58:30.983886 1065173 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:58:30.983977 1065173 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 22:58:30.984070 1065173 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:58:30.984153 1065173 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:58:30.950750 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 22:58:30.950781 1068518 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-841969"
	I1119 22:58:30.950888 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:30.975531 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:30.975935 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:30.975964 1068518 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-841969 && echo "default-k8s-diff-port-841969" | sudo tee /etc/hostname
	I1119 22:58:31.137351 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 22:58:31.137525 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:31.155184 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:31.155507 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:31.155531 1068518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-841969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-841969/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-841969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:58:31.303308 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:58:31.303340 1068518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 22:58:31.303372 1068518 ubuntu.go:190] setting up certificates
	I1119 22:58:31.303382 1068518 provision.go:84] configureAuth start
	I1119 22:58:31.303445 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:31.321370 1068518 provision.go:143] copyHostCerts
	I1119 22:58:31.321442 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 22:58:31.321455 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 22:58:31.321535 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 22:58:31.321640 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 22:58:31.321653 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 22:58:31.321683 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 22:58:31.321735 1068518 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 22:58:31.321746 1068518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 22:58:31.321771 1068518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 22:58:31.321860 1068518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-841969 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-841969 localhost minikube]
	I1119 22:58:31.866692 1068518 provision.go:177] copyRemoteCerts
	I1119 22:58:31.866773 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:58:31.866824 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:31.886895 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.003595 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:58:32.035965 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:58:32.065533 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:58:32.104140 1068518 provision.go:87] duration metric: took 800.733678ms to configureAuth
	I1119 22:58:32.104208 1068518 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:58:32.104413 1068518 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:32.104559 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.128525 1068518 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:32.128834 1068518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33866 <nil> <nil>}
	I1119 22:58:32.128850 1068518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:58:32.519213 1068518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:58:32.519282 1068518 machine.go:97] duration metric: took 4.766909664s to provisionDockerMachine
	I1119 22:58:32.519310 1068518 client.go:176] duration metric: took 11.722514024s to LocalClient.Create
	I1119 22:58:32.519343 1068518 start.go:167] duration metric: took 11.722623023s to libmachine.API.Create "default-k8s-diff-port-841969"
	I1119 22:58:32.519380 1068518 start.go:293] postStartSetup for "default-k8s-diff-port-841969" (driver="docker")
	I1119 22:58:32.519406 1068518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:58:32.519503 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:58:32.519587 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.552878 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.677329 1068518 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:58:32.681354 1068518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:58:32.681381 1068518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:58:32.681393 1068518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 22:58:32.681446 1068518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 22:58:32.681520 1068518 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 22:58:32.681620 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:58:32.694607 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:58:32.723807 1068518 start.go:296] duration metric: took 204.395932ms for postStartSetup
	I1119 22:58:32.724242 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:32.751974 1068518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 22:58:32.752250 1068518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:58:32.752293 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.790627 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.905805 1068518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:58:32.912709 1068518 start.go:128] duration metric: took 12.11979618s to createHost
	I1119 22:58:32.912731 1068518 start.go:83] releasing machines lock for "default-k8s-diff-port-841969", held for 12.119946023s
	I1119 22:58:32.912811 1068518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 22:58:32.939224 1068518 ssh_runner.go:195] Run: cat /version.json
	I1119 22:58:32.939279 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.939548 1068518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:58:32.939607 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:58:32.980452 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:32.987101 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:58:33.206755 1068518 ssh_runner.go:195] Run: systemctl --version
	I1119 22:58:33.215722 1068518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:58:33.297569 1068518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:58:33.302047 1068518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:58:33.302151 1068518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:58:33.345919 1068518 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 22:58:33.345960 1068518 start.go:496] detecting cgroup driver to use...
	I1119 22:58:33.346018 1068518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 22:58:33.346092 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:58:33.371137 1068518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:58:33.393584 1068518 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:58:33.393672 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:58:33.421153 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:58:33.451558 1068518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:58:33.653858 1068518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:58:33.874546 1068518 docker.go:234] disabling docker service ...
	I1119 22:58:33.874665 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:58:33.919152 1068518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:58:33.939327 1068518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:58:34.153149 1068518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:58:34.364650 1068518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:58:34.389599 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:58:34.415364 1068518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:58:34.415457 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.429323 1068518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:58:34.429404 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.443353 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.453496 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.473469 1068518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:58:34.487334 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.500108 1068518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.525760 1068518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:58:34.541474 1068518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:58:34.553119 1068518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:58:34.564560 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:34.771517 1068518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:58:34.998581 1068518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:58:34.998698 1068518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:58:35.004589 1068518 start.go:564] Will wait 60s for crictl version
	I1119 22:58:35.004766 1068518 ssh_runner.go:195] Run: which crictl
	I1119 22:58:35.011944 1068518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:58:35.048798 1068518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:58:35.048957 1068518 ssh_runner.go:195] Run: crio --version
	I1119 22:58:35.100708 1068518 ssh_runner.go:195] Run: crio --version
	I1119 22:58:35.153672 1068518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:58:35.156374 1068518 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:58:35.181308 1068518 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:58:35.185769 1068518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:58:35.195431 1068518 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:58:35.195579 1068518 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:35.195634 1068518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:58:35.254550 1068518 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:58:35.254630 1068518 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:58:35.254719 1068518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:58:35.300748 1068518 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:58:35.300768 1068518 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:58:35.300775 1068518 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 22:58:35.300856 1068518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-841969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
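The [Unit]/[Service] snippet above is the kubelet drop-in that minikube copies a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp). A sketch of installing an equivalent override by hand, assuming the flags and paths shown in the log (the heredoc uses <<- so the leading tabs are stripped from the written file):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'UNIT'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-841969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	UNIT
	sudo systemctl daemon-reload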
	I1119 22:58:35.300934 1068518 ssh_runner.go:195] Run: crio config
	I1119 22:58:35.415297 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:35.415361 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:35.415397 1068518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:58:35.415452 1068518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-841969 NodeName:default-k8s-diff-port-841969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:58:35.415607 1068518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-841969"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
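Everything from "apiVersion: kubeadm.k8s.io/v1beta4" down to the KubeProxyConfiguration block above is one generated file, written a few lines below to /var/tmp/minikube/kubeadm.yaml.new (2225 bytes). A sketch for sanity-checking such a file before init, assuming the v1.34.1 kubeadm binary staged by minikube (kubeadm's own `config validate` subcommand; the log itself does not run this step):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new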
	
	I1119 22:58:35.415698 1068518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:58:35.423726 1068518 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:58:35.423836 1068518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:58:35.431531 1068518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:58:35.444763 1068518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:58:35.459032 1068518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1119 22:58:35.472345 1068518 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:58:35.476634 1068518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:58:35.486173 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:35.664874 1068518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:58:35.695274 1068518 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969 for IP: 192.168.85.2
	I1119 22:58:35.695337 1068518 certs.go:195] generating shared ca certs ...
	I1119 22:58:35.695370 1068518 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.695543 1068518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 22:58:35.695620 1068518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 22:58:35.695657 1068518 certs.go:257] generating profile certs ...
	I1119 22:58:35.695765 1068518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key
	I1119 22:58:35.695801 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt with IP's: []
	I1119 22:58:35.890542 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt ...
	I1119 22:58:35.890616 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: {Name:mkffbab5d69d49b454a8bb9ea3dfaa425d14dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.890889 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key ...
	I1119 22:58:35.890930 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key: {Name:mk92fd4ce07959e3e9b1fc2ae6270f3aa98de476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:35.891108 1068518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d
	I1119 22:58:35.891150 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:58:36.210323 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d ...
	I1119 22:58:36.210409 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d: {Name:mk616dc652f9c93daf58d458da12806b2bd611c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.210649 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d ...
	I1119 22:58:36.210688 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d: {Name:mk3ea2ede1bbcbffaa4e47e9d48fccb357efb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.210837 1068518 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt.02fb524d -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt
	I1119 22:58:36.210993 1068518 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key
	I1119 22:58:36.211111 1068518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key
	I1119 22:58:36.211152 1068518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt with IP's: []
	I1119 22:58:36.788484 1068518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt ...
	I1119 22:58:36.788560 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt: {Name:mk9a13cd4ae2f3f341f69c86664fbf0f64b8630a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:36.788789 1068518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key ...
	I1119 22:58:36.788826 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key: {Name:mkd69c7d076ca3e8ee440f04afc872389e5ede7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
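At this point the profile has three signed pairs: the admin client cert, the apiserver serving cert (with the SANs listed above: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2), and the front-proxy "aggregator" client cert. A quick way to eyeball any of them with openssl, as a sketch (path taken from the log; -ext needs OpenSSL 1.1.1 or newer):

	openssl x509 -noout -subject -dates -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt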
	I1119 22:58:36.789092 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 22:58:36.789162 1068518 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 22:58:36.789204 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:58:36.789258 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:58:36.789320 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:58:36.789373 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 22:58:36.789459 1068518 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 22:58:36.790132 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:58:36.829704 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:58:36.862891 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:58:36.894733 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 22:58:36.921163 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:58:36.939430 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:58:36.958132 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:58:36.976169 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:58:37.010163 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 22:58:37.049003 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:58:37.077773 1068518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 22:58:37.105853 1068518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:58:37.123665 1068518 ssh_runner.go:195] Run: openssl version
	I1119 22:58:37.132181 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 22:58:37.142122 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.146653 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.146717 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 22:58:37.194704 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:58:37.203314 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:58:37.211661 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.216382 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.216491 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:58:37.258518 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:58:37.272245 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 22:58:37.281155 1068518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.291276 1068518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.291385 1068518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 22:58:37.338095 1068518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
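The pattern above repeats for each extra CA: copy the PEM under /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 here), which is the lookup scheme OpenSSL uses when verifying chains. The hash-and-link step for one certificate, as a sketch with paths taken from the log:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"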
	I1119 22:58:37.346562 1068518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:58:37.351225 1068518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:58:37.351326 1068518 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:37.351444 1068518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:58:37.351532 1068518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:58:37.395274 1068518 cri.go:89] found id: ""
	I1119 22:58:37.395397 1068518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:58:37.407418 1068518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:58:37.418268 1068518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:58:37.418427 1068518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:58:37.430242 1068518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:58:37.430310 1068518 kubeadm.go:158] found existing configuration files:
	
	I1119 22:58:37.430394 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:58:37.443171 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:58:37.443289 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:58:37.456752 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:58:37.468517 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:58:37.468584 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:58:37.480168 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:58:37.490460 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:58:37.490525 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:58:37.498295 1068518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:58:37.506666 1068518 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:58:37.506730 1068518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:58:37.514239 1068518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:58:37.637215 1068518 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:58:37.637278 1068518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:58:37.668071 1068518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:58:37.668150 1068518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 22:58:37.668194 1068518 kubeadm.go:319] OS: Linux
	I1119 22:58:37.668246 1068518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:58:37.668300 1068518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 22:58:37.668357 1068518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:58:37.668412 1068518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:58:37.668467 1068518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:58:37.668521 1068518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:58:37.668573 1068518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:58:37.668627 1068518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:58:37.668680 1068518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 22:58:37.782514 1068518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:58:37.782652 1068518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:58:37.782756 1068518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:58:37.794112 1068518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:58:37.800614 1068518 out.go:252]   - Generating certificates and keys ...
	I1119 22:58:37.800717 1068518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:58:37.800791 1068518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:58:38.070272 1068518 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:58:39.100955 1068518 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:58:39.419074 1068518 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:58:39.830382 1068518 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:58:40.118594 1068518 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:58:40.120338 1068518 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-841969 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:58:37.313915 1065173 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.329606547s
	I1119 22:58:38.717095 1065173 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.733286927s
	I1119 22:58:40.986162 1065173 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002308437s
	I1119 22:58:41.033446 1065173 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:58:41.055820 1065173 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:58:41.079783 1065173 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:58:41.079990 1065173 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-044665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:58:41.097985 1065173 kubeadm.go:319] [bootstrap-token] Using token: sii85k.kns7hdnytojwdni2
	I1119 22:58:41.100932 1065173 out.go:252]   - Configuring RBAC rules ...
	I1119 22:58:41.101060 1065173 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:58:41.110930 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:58:41.136983 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:58:41.147227 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:58:41.147366 1065173 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:58:41.153774 1065173 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:58:41.396742 1065173 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:58:41.879405 1065173 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:58:42.402984 1065173 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:58:42.404555 1065173 kubeadm.go:319] 
	I1119 22:58:42.404634 1065173 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:58:42.404647 1065173 kubeadm.go:319] 
	I1119 22:58:42.404729 1065173 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:58:42.404740 1065173 kubeadm.go:319] 
	I1119 22:58:42.404767 1065173 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:58:42.405200 1065173 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:58:42.405271 1065173 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:58:42.405281 1065173 kubeadm.go:319] 
	I1119 22:58:42.405338 1065173 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:58:42.405347 1065173 kubeadm.go:319] 
	I1119 22:58:42.405397 1065173 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:58:42.405405 1065173 kubeadm.go:319] 
	I1119 22:58:42.405459 1065173 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:58:42.405543 1065173 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:58:42.405620 1065173 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:58:42.405631 1065173 kubeadm.go:319] 
	I1119 22:58:42.405903 1065173 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:58:42.405995 1065173 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:58:42.406006 1065173 kubeadm.go:319] 
	I1119 22:58:42.406281 1065173 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sii85k.kns7hdnytojwdni2 \
	I1119 22:58:42.406422 1065173 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:58:42.406607 1065173 kubeadm.go:319] 	--control-plane 
	I1119 22:58:42.406620 1065173 kubeadm.go:319] 
	I1119 22:58:42.406898 1065173 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:58:42.406911 1065173 kubeadm.go:319] 
	I1119 22:58:42.407236 1065173 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sii85k.kns7hdnytojwdni2 \
	I1119 22:58:42.407526 1065173 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:58:42.414210 1065173 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:58:42.414481 1065173 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:58:42.414619 1065173 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:58:42.414645 1065173 cni.go:84] Creating CNI manager for ""
	I1119 22:58:42.414652 1065173 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:42.419589 1065173 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:58:40.757513 1068518 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:58:40.757767 1068518 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-841969 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:58:41.811507 1068518 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:58:42.592538 1068518 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:58:43.134595 1068518 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:58:43.138801 1068518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:58:43.979326 1068518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:58:44.332916 1068518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:58:44.638545 1068518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:58:44.831966 1068518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:58:45.944374 1068518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:58:45.945118 1068518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:58:45.955212 1068518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:58:42.422462 1065173 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:58:42.428083 1065173 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:58:42.428102 1065173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:58:42.452739 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
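The CNI step above checks for /opt/cni/bin/portmap, stages the kindnet manifest at /var/tmp/minikube/cni.yaml (2601 bytes), and applies it with the staged kubectl. A sketch for confirming the CNI workload came up afterwards (the grep pattern is an assumption about the object names, not something the log shows):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonsets,pods -o wide | grep -i kindnet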
	I1119 22:58:42.941161 1065173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:58:42.941316 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:42.941400 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044665 minikube.k8s.io/updated_at=2025_11_19T22_58_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=embed-certs-044665 minikube.k8s.io/primary=true
	I1119 22:58:43.200253 1065173 ops.go:34] apiserver oom_adj: -16
	I1119 22:58:43.200350 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:43.700453 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:44.200456 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:44.700880 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:45.200517 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:45.700820 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.200898 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.700474 1065173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:46.991033 1065173 kubeadm.go:1114] duration metric: took 4.049761856s to wait for elevateKubeSystemPrivileges
	I1119 22:58:46.991062 1065173 kubeadm.go:403] duration metric: took 28.783233471s to StartCluster
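The repeated "kubectl get sa default" runs above are a poll: the start waits (about 4s here, attributed to elevateKubeSystemPrivileges) for the default ServiceAccount to exist in the new cluster. The equivalent wait loop as a sketch, using the staged kubectl and kubeconfig from the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done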
	I1119 22:58:46.991080 1065173 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:46.991147 1065173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:58:46.992226 1065173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:58:46.992438 1065173 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:58:46.992579 1065173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:58:46.992844 1065173 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:46.992877 1065173 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:58:46.992941 1065173 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-044665"
	I1119 22:58:46.992962 1065173 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-044665"
	I1119 22:58:46.992983 1065173 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 22:58:46.993472 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:46.995524 1065173 addons.go:70] Setting default-storageclass=true in profile "embed-certs-044665"
	I1119 22:58:46.995559 1065173 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044665"
	I1119 22:58:46.995936 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:46.996245 1065173 out.go:179] * Verifying Kubernetes components...
	I1119 22:58:47.000444 1065173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:58:47.034563 1065173 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:58:47.040842 1065173 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:58:47.040870 1065173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:58:47.040956 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:47.055081 1065173 addons.go:239] Setting addon default-storageclass=true in "embed-certs-044665"
	I1119 22:58:47.055135 1065173 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 22:58:47.055609 1065173 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 22:58:47.087434 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:47.091068 1065173 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:58:47.091095 1065173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:58:47.091177 1065173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 22:58:47.123676 1065173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 22:58:47.734027 1065173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:58:47.734275 1065173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
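The sed pipeline above edits the CoreDNS Corefile in flight: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.76.1 for this profile) plus a log directive, then pipes the result into kubectl replace; the "host record injected into CoreDNS's ConfigMap" line below confirms it. A sketch for reading the resulting Corefile back:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'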
	I1119 22:58:47.769665 1065173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:58:47.808284 1065173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:58:48.404601 1065173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044665" to be "Ready" ...
	I1119 22:58:48.405731 1065173 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:58:48.773222 1065173 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:58:45.958744 1068518 out.go:252]   - Booting up control plane ...
	I1119 22:58:45.958852 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:58:45.967395 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:58:45.969505 1068518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:58:46.015700 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:58:46.015816 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:58:46.038171 1068518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:58:46.038488 1068518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:58:46.038645 1068518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:58:46.206571 1068518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:58:46.206697 1068518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:58:48.210264 1068518 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001601672s
	I1119 22:58:48.211529 1068518 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:58:48.211627 1068518 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1119 22:58:48.211873 1068518 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:58:48.212075 1068518 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
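The control-plane-check phase above polls three local endpoints until each reports healthy: the API server /livez on the advertised address and port (8444 for this profile), the controller-manager /healthz on 10257, and the scheduler /livez on 10259; the "healthy after" lines further down show how long each took. Probing the same endpoints by hand from inside the node, as a sketch (-k because they serve cluster-signed certificates):

	curl -ks https://192.168.85.2:8444/livez; echo
	curl -ks https://127.0.0.1:10257/healthz; echo
	curl -ks https://127.0.0.1:10259/livez; echo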
	I1119 22:58:48.776091 1065173 addons.go:515] duration metric: took 1.783187506s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:58:48.910305 1065173 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-044665" context rescaled to 1 replicas
	W1119 22:58:50.407919 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:58:52.197771 1068518 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.985559334s
	I1119 22:58:54.377249 1068518 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.164776642s
	I1119 22:58:54.713963 1068518 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502144601s
	I1119 22:58:54.742749 1068518 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:58:54.767848 1068518 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:58:54.786909 1068518 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:58:54.787137 1068518 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-841969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:58:54.803803 1068518 kubeadm.go:319] [bootstrap-token] Using token: a8tgfv.98xha8e3gtrfgpvq
	I1119 22:58:54.806774 1068518 out.go:252]   - Configuring RBAC rules ...
	I1119 22:58:54.806953 1068518 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:58:54.814112 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:58:54.833020 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:58:54.838734 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:58:54.845274 1068518 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:58:54.850363 1068518 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:58:55.120774 1068518 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:58:55.561258 1068518 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:58:56.120975 1068518 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:58:56.122195 1068518 kubeadm.go:319] 
	I1119 22:58:56.122269 1068518 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:58:56.122276 1068518 kubeadm.go:319] 
	I1119 22:58:56.122356 1068518 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:58:56.122360 1068518 kubeadm.go:319] 
	I1119 22:58:56.122387 1068518 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:58:56.122449 1068518 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:58:56.122501 1068518 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:58:56.122506 1068518 kubeadm.go:319] 
	I1119 22:58:56.122562 1068518 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:58:56.122567 1068518 kubeadm.go:319] 
	I1119 22:58:56.122616 1068518 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:58:56.122621 1068518 kubeadm.go:319] 
	I1119 22:58:56.122675 1068518 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:58:56.122753 1068518 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:58:56.122824 1068518 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:58:56.122829 1068518 kubeadm.go:319] 
	I1119 22:58:56.122943 1068518 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:58:56.123033 1068518 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:58:56.123038 1068518 kubeadm.go:319] 
	I1119 22:58:56.123126 1068518 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token a8tgfv.98xha8e3gtrfgpvq \
	I1119 22:58:56.123234 1068518 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 22:58:56.123255 1068518 kubeadm.go:319] 	--control-plane 
	I1119 22:58:56.123260 1068518 kubeadm.go:319] 
	I1119 22:58:56.123348 1068518 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:58:56.123353 1068518 kubeadm.go:319] 
	I1119 22:58:56.123438 1068518 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token a8tgfv.98xha8e3gtrfgpvq \
	I1119 22:58:56.123544 1068518 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 22:58:56.127702 1068518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 22:58:56.127937 1068518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 22:58:56.128052 1068518 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:58:56.128071 1068518 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.128082 1068518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:58:56.131227 1068518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1119 22:58:52.408031 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:58:54.907856 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:58:56.134145 1068518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:58:56.138535 1068518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:58:56.138595 1068518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:58:56.154457 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:58:56.500372 1068518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:58:56.500526 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:56.500621 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-841969 minikube.k8s.io/updated_at=2025_11_19T22_58_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-841969 minikube.k8s.io/primary=true
	I1119 22:58:56.779174 1068518 ops.go:34] apiserver oom_adj: -16
	I1119 22:58:56.779278 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:57.280053 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:57.779636 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:58.280063 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:58.779741 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:59.279700 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:58:59.780119 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:59:00.280057 1068518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:59:00.433252 1068518 kubeadm.go:1114] duration metric: took 3.932780413s to wait for elevateKubeSystemPrivileges
	I1119 22:59:00.433282 1068518 kubeadm.go:403] duration metric: took 23.081960035s to StartCluster
	I1119 22:59:00.433299 1068518 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:00.433365 1068518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:59:00.435187 1068518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:00.435708 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:59:00.435711 1068518 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:00.436017 1068518 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:00.436067 1068518 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:00.436130 1068518 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-841969"
	I1119 22:59:00.436148 1068518 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-841969"
	I1119 22:59:00.436172 1068518 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 22:59:00.436375 1068518 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-841969"
	I1119 22:59:00.436419 1068518 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-841969"
	I1119 22:59:00.436682 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.436797 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.439202 1068518 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:00.443120 1068518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:00.479639 1068518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1119 22:58:56.908058 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:58:59.408214 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:59:00.482374 1068518 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:59:00.482395 1068518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:59:00.482465 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:59:00.485747 1068518 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-841969"
	I1119 22:59:00.485794 1068518 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 22:59:00.486238 1068518 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 22:59:00.531020 1068518 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:59:00.531043 1068518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:59:00.531123 1068518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 22:59:00.551282 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:59:00.568367 1068518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33866 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 22:59:00.888472 1068518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:59:00.888587 1068518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:00.897763 1068518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:59:00.974170 1068518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:59:01.514177 1068518 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:59:01.514897 1068518 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 22:59:01.943346 1068518 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 22:59:01.946313 1068518 addons.go:515] duration metric: took 1.510225713s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 22:59:02.020332 1068518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-841969" context rescaled to 1 replicas
	W1119 22:59:03.518359 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:01.409237 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:03.908019 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:05.908326 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:05.518672 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:08.019092 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:08.408193 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:10.908157 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:10.518559 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:13.018807 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:15.019752 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:12.908322 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:14.908392 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:17.517820 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:19.518187 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:17.407779 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:19.408573 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:21.518332 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:24.017718 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:21.908146 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:24.408542 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	W1119 22:59:26.017985 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:28.518403 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:26.908153 1065173 node_ready.go:57] node "embed-certs-044665" has "Ready":"False" status (will retry)
	I1119 22:59:28.413147 1065173 node_ready.go:49] node "embed-certs-044665" is "Ready"
	I1119 22:59:28.413181 1065173 node_ready.go:38] duration metric: took 40.008497398s for node "embed-certs-044665" to be "Ready" ...
	I1119 22:59:28.413195 1065173 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:59:28.413250 1065173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:59:28.430099 1065173 api_server.go:72] duration metric: took 41.43762575s to wait for apiserver process to appear ...
	I1119 22:59:28.430125 1065173 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:59:28.430144 1065173 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:59:28.440450 1065173 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:59:28.446200 1065173 api_server.go:141] control plane version: v1.34.1
	I1119 22:59:28.446229 1065173 api_server.go:131] duration metric: took 16.096292ms to wait for apiserver health ...
	I1119 22:59:28.446239 1065173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:59:28.451023 1065173 system_pods.go:59] 8 kube-system pods found
	I1119 22:59:28.451060 1065173 system_pods.go:61] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.451067 1065173 system_pods.go:61] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.451073 1065173 system_pods.go:61] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.451079 1065173 system_pods.go:61] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.451085 1065173 system_pods.go:61] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.451091 1065173 system_pods.go:61] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.451095 1065173 system_pods.go:61] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.451101 1065173 system_pods.go:61] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.451110 1065173 system_pods.go:74] duration metric: took 4.862479ms to wait for pod list to return data ...
	I1119 22:59:28.451118 1065173 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:59:28.454135 1065173 default_sa.go:45] found service account: "default"
	I1119 22:59:28.454157 1065173 default_sa.go:55] duration metric: took 3.03296ms for default service account to be created ...
	I1119 22:59:28.454171 1065173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:59:28.459699 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:28.459729 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.459735 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.459742 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.459746 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.459751 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.459754 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.459758 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.459764 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.459783 1065173 retry.go:31] will retry after 273.81481ms: missing components: kube-dns
	I1119 22:59:28.737999 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:28.738047 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:28.738056 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:28.738063 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:28.738067 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:28.738072 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:28.738076 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:28.738080 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:28.738092 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:28.738106 1065173 retry.go:31] will retry after 323.983424ms: missing components: kube-dns
	I1119 22:59:29.065724 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:29.065763 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:29.065770 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:29.065777 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:29.065781 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:29.065785 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:29.065790 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:29.065794 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:29.065800 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:29.065814 1065173 retry.go:31] will retry after 478.611601ms: missing components: kube-dns
	I1119 22:59:29.548836 1065173 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:29.548869 1065173 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running
	I1119 22:59:29.548877 1065173 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running
	I1119 22:59:29.548882 1065173 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 22:59:29.548886 1065173 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running
	I1119 22:59:29.548891 1065173 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running
	I1119 22:59:29.548895 1065173 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 22:59:29.548900 1065173 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running
	I1119 22:59:29.548905 1065173 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 22:59:29.548912 1065173 system_pods.go:126] duration metric: took 1.094736434s to wait for k8s-apps to be running ...
	I1119 22:59:29.548925 1065173 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:59:29.548983 1065173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:59:29.563406 1065173 system_svc.go:56] duration metric: took 14.470293ms WaitForService to wait for kubelet
	I1119 22:59:29.563476 1065173 kubeadm.go:587] duration metric: took 42.571006676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:59:29.563502 1065173 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:59:29.566546 1065173 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:59:29.566581 1065173 node_conditions.go:123] node cpu capacity is 2
	I1119 22:59:29.566611 1065173 node_conditions.go:105] duration metric: took 3.101399ms to run NodePressure ...
	I1119 22:59:29.566624 1065173 start.go:242] waiting for startup goroutines ...
	I1119 22:59:29.566662 1065173 start.go:247] waiting for cluster config update ...
	I1119 22:59:29.566682 1065173 start.go:256] writing updated cluster config ...
	I1119 22:59:29.567050 1065173 ssh_runner.go:195] Run: rm -f paused
	I1119 22:59:29.570694 1065173 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:29.574950 1065173 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.579581 1065173 pod_ready.go:94] pod "coredns-66bc5c9577-kcs7v" is "Ready"
	I1119 22:59:29.579612 1065173 pod_ready.go:86] duration metric: took 4.632389ms for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.581649 1065173 pod_ready.go:83] waiting for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.585839 1065173 pod_ready.go:94] pod "etcd-embed-certs-044665" is "Ready"
	I1119 22:59:29.585864 1065173 pod_ready.go:86] duration metric: took 4.190737ms for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.588310 1065173 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.592751 1065173 pod_ready.go:94] pod "kube-apiserver-embed-certs-044665" is "Ready"
	I1119 22:59:29.592779 1065173 pod_ready.go:86] duration metric: took 4.442653ms for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.595312 1065173 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:29.975823 1065173 pod_ready.go:94] pod "kube-controller-manager-embed-certs-044665" is "Ready"
	I1119 22:59:29.975855 1065173 pod_ready.go:86] duration metric: took 380.517091ms for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.175585 1065173 pod_ready.go:83] waiting for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.574950 1065173 pod_ready.go:94] pod "kube-proxy-w5t4l" is "Ready"
	I1119 22:59:30.574980 1065173 pod_ready.go:86] duration metric: took 399.326908ms for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:30.775683 1065173 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:31.176314 1065173 pod_ready.go:94] pod "kube-scheduler-embed-certs-044665" is "Ready"
	I1119 22:59:31.176345 1065173 pod_ready.go:86] duration metric: took 400.63526ms for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:31.176359 1065173 pod_ready.go:40] duration metric: took 1.605630292s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:31.232055 1065173 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:59:31.235708 1065173 out.go:179] * Done! kubectl is now configured to use "embed-certs-044665" cluster and "default" namespace by default
	W1119 22:59:30.518438 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:33.017665 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:35.017878 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:37.019468 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:39.517749 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	W1119 22:59:42.034744 1068518 node_ready.go:57] node "default-k8s-diff-port-841969" has "Ready":"False" status (will retry)
	I1119 22:59:42.519272 1068518 node_ready.go:49] node "default-k8s-diff-port-841969" is "Ready"
	I1119 22:59:42.519300 1068518 node_ready.go:38] duration metric: took 41.004367451s for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 22:59:42.519314 1068518 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:59:42.519514 1068518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:59:42.575582 1068518 api_server.go:72] duration metric: took 42.139658043s to wait for apiserver process to appear ...
	I1119 22:59:42.575612 1068518 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:59:42.575632 1068518 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 22:59:42.631074 1068518 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 22:59:42.633370 1068518 api_server.go:141] control plane version: v1.34.1
	I1119 22:59:42.633408 1068518 api_server.go:131] duration metric: took 57.788705ms to wait for apiserver health ...
	I1119 22:59:42.633417 1068518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:59:42.649054 1068518 system_pods.go:59] 8 kube-system pods found
	I1119 22:59:42.649088 1068518 system_pods.go:61] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:42.649095 1068518 system_pods.go:61] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running
	I1119 22:59:42.649101 1068518 system_pods.go:61] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 22:59:42.649106 1068518 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 22:59:42.649111 1068518 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running
	I1119 22:59:42.649125 1068518 system_pods.go:61] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 22:59:42.649133 1068518 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running
	I1119 22:59:42.649140 1068518 system_pods.go:61] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:42.649148 1068518 system_pods.go:74] duration metric: took 15.725156ms to wait for pod list to return data ...
	I1119 22:59:42.649157 1068518 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:59:42.656902 1068518 default_sa.go:45] found service account: "default"
	I1119 22:59:42.656923 1068518 default_sa.go:55] duration metric: took 7.759422ms for default service account to be created ...
	I1119 22:59:42.656933 1068518 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:59:42.768671 1068518 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:42.768847 1068518 system_pods.go:89] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:42.768859 1068518 system_pods.go:89] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running
	I1119 22:59:42.768872 1068518 system_pods.go:89] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 22:59:42.768876 1068518 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 22:59:42.768880 1068518 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running
	I1119 22:59:42.768885 1068518 system_pods.go:89] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 22:59:42.768889 1068518 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running
	I1119 22:59:42.768904 1068518 system_pods.go:89] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:59:42.768926 1068518 retry.go:31] will retry after 254.721356ms: missing components: kube-dns
	I1119 22:59:43.029046 1068518 system_pods.go:86] 8 kube-system pods found
	I1119 22:59:43.029086 1068518 system_pods.go:89] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:59:43.029094 1068518 system_pods.go:89] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running
	I1119 22:59:43.029100 1068518 system_pods.go:89] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 22:59:43.029106 1068518 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 22:59:43.029111 1068518 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running
	I1119 22:59:43.029117 1068518 system_pods.go:89] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 22:59:43.029122 1068518 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running
	I1119 22:59:43.029130 1068518 system_pods.go:89] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Running
	I1119 22:59:43.029139 1068518 system_pods.go:126] duration metric: took 372.199418ms to wait for k8s-apps to be running ...
	I1119 22:59:43.029153 1068518 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:59:43.029211 1068518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:59:43.045900 1068518 system_svc.go:56] duration metric: took 16.73757ms WaitForService to wait for kubelet
	I1119 22:59:43.045927 1068518 kubeadm.go:587] duration metric: took 42.610007366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:59:43.045944 1068518 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:59:43.049738 1068518 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 22:59:43.049775 1068518 node_conditions.go:123] node cpu capacity is 2
	I1119 22:59:43.049792 1068518 node_conditions.go:105] duration metric: took 3.842689ms to run NodePressure ...
	I1119 22:59:43.049805 1068518 start.go:242] waiting for startup goroutines ...
	I1119 22:59:43.049813 1068518 start.go:247] waiting for cluster config update ...
	I1119 22:59:43.049824 1068518 start.go:256] writing updated cluster config ...
	I1119 22:59:43.050138 1068518 ssh_runner.go:195] Run: rm -f paused
	I1119 22:59:43.056105 1068518 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:43.062089 1068518 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.069701 1068518 pod_ready.go:94] pod "coredns-66bc5c9577-zkjxn" is "Ready"
	I1119 22:59:44.069730 1068518 pod_ready.go:86] duration metric: took 1.007610063s for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.073873 1068518 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.081200 1068518 pod_ready.go:94] pod "etcd-default-k8s-diff-port-841969" is "Ready"
	I1119 22:59:44.081226 1068518 pod_ready.go:86] duration metric: took 7.327345ms for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.084597 1068518 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.091790 1068518 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-841969" is "Ready"
	I1119 22:59:44.091867 1068518 pod_ready.go:86] duration metric: took 7.243389ms for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.101399 1068518 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.266961 1068518 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-841969" is "Ready"
	I1119 22:59:44.266988 1068518 pod_ready.go:86] duration metric: took 165.564051ms for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.466837 1068518 pod_ready.go:83] waiting for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:44.866615 1068518 pod_ready.go:94] pod "kube-proxy-fbmdp" is "Ready"
	I1119 22:59:44.866647 1068518 pod_ready.go:86] duration metric: took 399.785562ms for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:45.101341 1068518 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:45.466025 1068518 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-841969" is "Ready"
	I1119 22:59:45.466057 1068518 pod_ready.go:86] duration metric: took 364.686008ms for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:59:45.466072 1068518 pod_ready.go:40] duration metric: took 2.409917277s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:59:45.518348 1068518 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 22:59:45.521691 1068518 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-841969" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 22:59:42 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:42.575436296Z" level=info msg="Created container fe523a2ebe7000ae09f8cda554009aa970166ff6c105a1eca81b278d48e5cf03: kube-system/storage-provisioner/storage-provisioner" id=33c53bde-d0ac-4d00-a115-6a562966c7a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:42 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:42.582680851Z" level=info msg="Starting container: fe523a2ebe7000ae09f8cda554009aa970166ff6c105a1eca81b278d48e5cf03" id=d9daba77-b43d-4a60-a795-6bf785fefba0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:59:42 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:42.595275241Z" level=info msg="Started container" PID=1740 containerID=fe523a2ebe7000ae09f8cda554009aa970166ff6c105a1eca81b278d48e5cf03 description=kube-system/storage-provisioner/storage-provisioner id=d9daba77-b43d-4a60-a795-6bf785fefba0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7853e9470a0bdeeacb575276aa28d8e8739b2767c8feb418614773f2794c9663
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.061146307Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fa89ed49-a672-45ae-9eae-f98aaa00f8e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.061222295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.075597606Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd53d8614fb9af0045f33f6e37a474ec90d38caf602a31de7b5f99876b224fa7 UID:bb3c0020-a370-4686-aa7a-f5c0e59492e9 NetNS:/var/run/netns/24c69003-387e-4a84-963f-89630f335c20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078be0}] Aliases:map[]}"
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.075640404Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.092410525Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd53d8614fb9af0045f33f6e37a474ec90d38caf602a31de7b5f99876b224fa7 UID:bb3c0020-a370-4686-aa7a-f5c0e59492e9 NetNS:/var/run/netns/24c69003-387e-4a84-963f-89630f335c20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078be0}] Aliases:map[]}"
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.092577615Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.095300394Z" level=info msg="Ran pod sandbox cd53d8614fb9af0045f33f6e37a474ec90d38caf602a31de7b5f99876b224fa7 with infra container: default/busybox/POD" id=fa89ed49-a672-45ae-9eae-f98aaa00f8e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.098806892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4254be7f-ca81-42d6-9794-35132470383b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.099253878Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4254be7f-ca81-42d6-9794-35132470383b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.099307786Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4254be7f-ca81-42d6-9794-35132470383b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.101992706Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6133a40f-d2ad-42b1-88a7-10b6a2cd141c name=/runtime.v1.ImageService/PullImage
	Nov 19 22:59:46 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:46.104698533Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.124207094Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6133a40f-d2ad-42b1-88a7-10b6a2cd141c name=/runtime.v1.ImageService/PullImage
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.125214659Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=185c9db7-0662-4963-b211-7efd77264ba9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.126961783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8507783-af06-42df-ab20-f434f4273210 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.132351742Z" level=info msg="Creating container: default/busybox/busybox" id=313100e7-14e2-4c40-b880-d7915f4fb12e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.132524051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.137515221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.137993182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.155134234Z" level=info msg="Created container 5d758af3e7f6230e9a0f70ea6fdf20a2dc67fdc06d02a36ba1c8419d64f242ae: default/busybox/busybox" id=313100e7-14e2-4c40-b880-d7915f4fb12e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.159389653Z" level=info msg="Starting container: 5d758af3e7f6230e9a0f70ea6fdf20a2dc67fdc06d02a36ba1c8419d64f242ae" id=cd91b60e-5a69-4e84-b258-8efc42f5efb7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:59:48 default-k8s-diff-port-841969 crio[842]: time="2025-11-19T22:59:48.162762086Z" level=info msg="Started container" PID=1810 containerID=5d758af3e7f6230e9a0f70ea6fdf20a2dc67fdc06d02a36ba1c8419d64f242ae description=default/busybox/busybox id=cd91b60e-5a69-4e84-b258-8efc42f5efb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd53d8614fb9af0045f33f6e37a474ec90d38caf602a31de7b5f99876b224fa7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	5d758af3e7f62       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   cd53d8614fb9a       busybox                                                default
	fe523a2ebe700       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   7853e9470a0bd       storage-provisioner                                    kube-system
	684f084eb9757       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   7ce2663cc877c       coredns-66bc5c9577-zkjxn                               kube-system
	de8fd453f7804       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   7409d87f6f2c1       kube-proxy-fbmdp                                       kube-system
	5c277dc2335ea       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   bb967c1281177       kindnet-65cjg                                          kube-system
	6f87c86040039       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   10d99e773dec2       kube-scheduler-default-k8s-diff-port-841969            kube-system
	35a7522338aa9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   01357ed16c5ba       kube-controller-manager-default-k8s-diff-port-841969   kube-system
	a7184ef6e6e40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   2845d1e530eb7       etcd-default-k8s-diff-port-841969                      kube-system
	0517d36fe5a69       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   5140c8177aca9       kube-apiserver-default-k8s-diff-port-841969            kube-system
	
	
	==> coredns [684f084eb9757cd357b56251bfc93e4f570f99c82bd5f5a345e3090dd3ef656a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37956 - 53931 "HINFO IN 8600781754524988597.599883209331626739. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039204925s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-841969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-841969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-841969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-841969
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:59:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:59:42 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:59:42 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:59:42 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:59:42 +0000   Wed, 19 Nov 2025 22:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-841969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                8530e068-8eb5-4533-912c-551d1cf1fd1e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-zkjxn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-841969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-65cjg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-841969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-841969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-fbmdp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-841969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 68s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-841969 event: Registered Node default-k8s-diff-port-841969 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-841969 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 22:34] overlayfs: idmapped layers are currently not supported
	[Nov19 22:35] overlayfs: idmapped layers are currently not supported
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a7184ef6e6e409221604502081cd644631db7b1843eb6ec3cfaff7ebf7b10447] <==
	{"level":"warn","ts":"2025-11-19T22:58:50.785883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.803105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.815563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.862219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.908450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.945400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:50.978716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.003570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.042586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.089761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.111043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.154208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.170624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.199783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.237201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.275938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.310776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.348709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.358063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.381328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.425761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.476333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.519988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.562203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:58:51.675855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49452","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:59:55 up  4:42,  0 user,  load average: 2.03, 2.88, 2.49
	Linux default-k8s-diff-port-841969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c277dc2335ea22dd1dd408f7f3b496b27bff83cc63cfc7cd923311ffaa26b7b] <==
	I1119 22:59:01.534090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:59:01.534537       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:59:01.543489       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:59:01.543530       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:59:01.543547       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:59:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:59:01.765885       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:59:01.765917       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:59:01.765927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:59:01.768063       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:59:31.767054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:59:31.767062       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:59:31.767165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:59:31.777485       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:59:33.366579       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:59:33.366606       1 metrics.go:72] Registering metrics
	I1119 22:59:33.366660       1 controller.go:711] "Syncing nftables rules"
	I1119 22:59:41.766205       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:59:41.766260       1 main.go:301] handling current node
	I1119 22:59:51.766250       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:59:51.766282       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0517d36fe5a694e510b73fb8405be127bcb074d5a8b19415883fee5241649cc4] <==
	I1119 22:58:52.910197       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:58:52.910205       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:58:52.910212       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:58:52.919545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:52.920083       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:58:52.940351       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:58:52.940444       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:58:53.482720       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:58:53.493857       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:58:53.493939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:58:54.427510       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:58:54.491747       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:58:54.596910       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:58:54.604270       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 22:58:54.605525       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:58:54.610846       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:58:54.739372       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:58:55.539686       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:58:55.560168       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:58:55.573188       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:59:00.580964       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:59:00.811263       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:59:00.830145       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:59:00.832763       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:59:53.855249       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:47284: use of closed network connection
	
	
	==> kube-controller-manager [35a7522338aa9b2760ea16a1a9ca5e08abbc15773871c56ce2410ea37ea34fa1] <==
	I1119 22:58:59.783303       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:58:59.783634       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:58:59.784072       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-841969"
	I1119 22:58:59.784194       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:58:59.784150       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:58:59.784818       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:58:59.783545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:58:59.787313       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:58:59.787391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:58:59.787265       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:58:59.787276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:58:59.785292       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:58:59.787236       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:58:59.799062       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:58:59.799217       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:58:59.799263       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:58:59.799330       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:58:59.800173       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:58:59.803133       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:58:59.803240       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:58:59.803271       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:58:59.803301       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:58:59.804010       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:58:59.857967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-841969" podCIDRs=["10.244.0.0/24"]
	I1119 22:59:44.791374       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [de8fd453f7804e8d003004866067ca92ff5fcf9abf27074762363c35ae686fa6] <==
	I1119 22:59:01.997619       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:59:02.088552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:59:02.189247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:59:02.189287       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:59:02.189361       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:59:02.208668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:59:02.208723       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:59:02.212600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:59:02.212890       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:59:02.212956       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:59:02.215997       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:59:02.216069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:59:02.216373       1 config.go:200] "Starting service config controller"
	I1119 22:59:02.216417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:59:02.216728       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:59:02.218946       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:59:02.217095       1 config.go:309] "Starting node config controller"
	I1119 22:59:02.219121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:59:02.219149       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:59:02.316293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:59:02.317559       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:59:02.319114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6f87c86040039ef072c1a02df86c195c440a6421d43170901fb5da29c423e632] <==
	I1119 22:58:52.373987       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:58:54.321921       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:58:54.322453       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:58:54.322512       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:58:54.322547       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:58:54.348919       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:58:54.348963       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:58:54.352825       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:58:54.353033       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:58:54.353098       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:58:54.353143       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:58:54.367587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1119 22:58:55.853995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:58:59 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:58:59.943843    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:58:59 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:58:59.944747    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:59:00 default-k8s-diff-port-841969 kubelet[1312]: E1119 22:59:00.970477    1312 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-841969\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-841969' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015322    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28c6ce-40e6-411e-b7e0-1a6b5914c710-lib-modules\") pod \"kube-proxy-fbmdp\" (UID: \"ef28c6ce-40e6-411e-b7e0-1a6b5914c710\") " pod="kube-system/kube-proxy-fbmdp"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015391    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28c6ce-40e6-411e-b7e0-1a6b5914c710-xtables-lock\") pod \"kube-proxy-fbmdp\" (UID: \"ef28c6ce-40e6-411e-b7e0-1a6b5914c710\") " pod="kube-system/kube-proxy-fbmdp"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015418    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vcrw\" (UniqueName: \"kubernetes.io/projected/ef28c6ce-40e6-411e-b7e0-1a6b5914c710-kube-api-access-4vcrw\") pod \"kube-proxy-fbmdp\" (UID: \"ef28c6ce-40e6-411e-b7e0-1a6b5914c710\") " pod="kube-system/kube-proxy-fbmdp"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015443    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c756094a-14e9-41ea-b7a7-0af539154203-cni-cfg\") pod \"kindnet-65cjg\" (UID: \"c756094a-14e9-41ea-b7a7-0af539154203\") " pod="kube-system/kindnet-65cjg"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015465    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c756094a-14e9-41ea-b7a7-0af539154203-xtables-lock\") pod \"kindnet-65cjg\" (UID: \"c756094a-14e9-41ea-b7a7-0af539154203\") " pod="kube-system/kindnet-65cjg"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015489    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c756094a-14e9-41ea-b7a7-0af539154203-lib-modules\") pod \"kindnet-65cjg\" (UID: \"c756094a-14e9-41ea-b7a7-0af539154203\") " pod="kube-system/kindnet-65cjg"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015527    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfpc\" (UniqueName: \"kubernetes.io/projected/c756094a-14e9-41ea-b7a7-0af539154203-kube-api-access-svfpc\") pod \"kindnet-65cjg\" (UID: \"c756094a-14e9-41ea-b7a7-0af539154203\") " pod="kube-system/kindnet-65cjg"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.015548    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef28c6ce-40e6-411e-b7e0-1a6b5914c710-kube-proxy\") pod \"kube-proxy-fbmdp\" (UID: \"ef28c6ce-40e6-411e-b7e0-1a6b5914c710\") " pod="kube-system/kube-proxy-fbmdp"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:01.178587    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: W1119 22:59:01.270820    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/crio-bb967c1281177d68be0985ac918fea7ec9f185202c80f27fbb0caf3726e2c13a WatchSource:0}: Error finding container bb967c1281177d68be0985ac918fea7ec9f185202c80f27fbb0caf3726e2c13a: Status 404 returned error can't find the container with id bb967c1281177d68be0985ac918fea7ec9f185202c80f27fbb0caf3726e2c13a
	Nov 19 22:59:01 default-k8s-diff-port-841969 kubelet[1312]: W1119 22:59:01.863222    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/crio-7409d87f6f2c1600ba6c867485625a8d3b538052ab016cd72db61e8837b0fbef WatchSource:0}: Error finding container 7409d87f6f2c1600ba6c867485625a8d3b538052ab016cd72db61e8837b0fbef: Status 404 returned error can't find the container with id 7409d87f6f2c1600ba6c867485625a8d3b538052ab016cd72db61e8837b0fbef
	Nov 19 22:59:02 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:02.481867    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-65cjg" podStartSLOduration=2.481849669 podStartE2EDuration="2.481849669s" podCreationTimestamp="2025-11-19 22:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:01.648261233 +0000 UTC m=+6.298411694" watchObservedRunningTime="2025-11-19 22:59:02.481849669 +0000 UTC m=+7.132000130"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.005600    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.085234    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fbmdp" podStartSLOduration=42.085215051 podStartE2EDuration="42.085215051s" podCreationTimestamp="2025-11-19 22:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:02.629402561 +0000 UTC m=+7.279553022" watchObservedRunningTime="2025-11-19 22:59:42.085215051 +0000 UTC m=+46.735365520"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.102498    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c4a619c-0219-4f38-897a-d3989d4d3ed9-config-volume\") pod \"coredns-66bc5c9577-zkjxn\" (UID: \"1c4a619c-0219-4f38-897a-d3989d4d3ed9\") " pod="kube-system/coredns-66bc5c9577-zkjxn"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.102984    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fpb\" (UniqueName: \"kubernetes.io/projected/1c4a619c-0219-4f38-897a-d3989d4d3ed9-kube-api-access-r4fpb\") pod \"coredns-66bc5c9577-zkjxn\" (UID: \"1c4a619c-0219-4f38-897a-d3989d4d3ed9\") " pod="kube-system/coredns-66bc5c9577-zkjxn"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.217110    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vxt\" (UniqueName: \"kubernetes.io/projected/c79703f3-5114-46df-8d46-987b4a56f647-kube-api-access-v5vxt\") pod \"storage-provisioner\" (UID: \"c79703f3-5114-46df-8d46-987b4a56f647\") " pod="kube-system/storage-provisioner"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.219036    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c79703f3-5114-46df-8d46-987b4a56f647-tmp\") pod \"storage-provisioner\" (UID: \"c79703f3-5114-46df-8d46-987b4a56f647\") " pod="kube-system/storage-provisioner"
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: W1119 22:59:42.445704    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/crio-7ce2663cc877c6b2bf37310a754691f368d099a834065707ec32dc6344ef8489 WatchSource:0}: Error finding container 7ce2663cc877c6b2bf37310a754691f368d099a834065707ec32dc6344ef8489: Status 404 returned error can't find the container with id 7ce2663cc877c6b2bf37310a754691f368d099a834065707ec32dc6344ef8489
	Nov 19 22:59:42 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:42.765355    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.765333028 podStartE2EDuration="41.765333028s" podCreationTimestamp="2025-11-19 22:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:42.762816996 +0000 UTC m=+47.412967473" watchObservedRunningTime="2025-11-19 22:59:42.765333028 +0000 UTC m=+47.415483522"
	Nov 19 22:59:43 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:43.710062    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zkjxn" podStartSLOduration=43.710040843 podStartE2EDuration="43.710040843s" podCreationTimestamp="2025-11-19 22:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:59:42.818168463 +0000 UTC m=+47.468318923" watchObservedRunningTime="2025-11-19 22:59:43.710040843 +0000 UTC m=+48.360191304"
	Nov 19 22:59:45 default-k8s-diff-port-841969 kubelet[1312]: I1119 22:59:45.864262    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9smgl\" (UniqueName: \"kubernetes.io/projected/bb3c0020-a370-4686-aa7a-f5c0e59492e9-kube-api-access-9smgl\") pod \"busybox\" (UID: \"bb3c0020-a370-4686-aa7a-f5c0e59492e9\") " pod="default/busybox"
	
	
	==> storage-provisioner [fe523a2ebe7000ae09f8cda554009aa970166ff6c105a1eca81b278d48e5cf03] <==
	I1119 22:59:42.609881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:59:42.639987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:59:42.640141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:59:42.671222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:42.714179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:59:42.716198       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:59:42.716691       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_0fa6383e-a09b-44d9-8f13-0eadd1831deb!
	I1119 22:59:42.719617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"436ac8e0-d182-4d76-a461-e7e8abb5704d", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-841969_0fa6383e-a09b-44d9-8f13-0eadd1831deb became leader
	W1119 22:59:42.775889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:42.796327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:59:42.817273       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_0fa6383e-a09b-44d9-8f13-0eadd1831deb!
	W1119 22:59:44.801408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:44.814098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:46.817522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:46.824675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:48.827353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:48.831637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:50.834584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:50.841506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:52.844914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:52.850034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:54.852718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:59:54.860327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.48s)
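(The two post-mortem commands above are the harness's apiserver-status probe and its scan for pods not in phase Running. A minimal sketch for repeating the same checks by hand, assuming the default-k8s-diff-port-841969 profile still exists; `-o wide` is substituted for the harness's jsonpath output so pod phases stay visible:)

	# same apiserver status probe the harness runs
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
	# list every pod that is not in phase Running across all namespaces
	kubectl --context default-k8s-diff-port-841969 get po -A --field-selector=status.phase!=Running -o wide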

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-044665 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-044665 --alsologtostderr -v=1: exit status 80 (2.672564539s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-044665 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:01:00.942999 1077750 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:01:00.943229 1077750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:00.943260 1077750 out.go:374] Setting ErrFile to fd 2...
	I1119 23:01:00.943281 1077750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:00.943561 1077750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:01:00.943873 1077750 out.go:368] Setting JSON to false
	I1119 23:01:00.943929 1077750 mustload.go:66] Loading cluster: embed-certs-044665
	I1119 23:01:00.944350 1077750 config.go:182] Loaded profile config "embed-certs-044665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:00.944840 1077750 cli_runner.go:164] Run: docker container inspect embed-certs-044665 --format={{.State.Status}}
	I1119 23:01:00.964485 1077750 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 23:01:00.964905 1077750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:01.050503 1077750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 23:01:01.038786395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:01.051457 1077750 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-044665 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 23:01:01.057071 1077750 out.go:179] * Pausing node embed-certs-044665 ... 
	I1119 23:01:01.063009 1077750 host.go:66] Checking if "embed-certs-044665" exists ...
	I1119 23:01:01.063371 1077750 ssh_runner.go:195] Run: systemctl --version
	I1119 23:01:01.063417 1077750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-044665
	I1119 23:01:01.103688 1077750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33871 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/embed-certs-044665/id_rsa Username:docker}
	I1119 23:01:01.211663 1077750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:01.240063 1077750 pause.go:52] kubelet running: true
	I1119 23:01:01.240140 1077750 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:01.555722 1077750 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:01.555854 1077750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:01.652061 1077750 cri.go:89] found id: "046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7"
	I1119 23:01:01.652086 1077750 cri.go:89] found id: "b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e"
	I1119 23:01:01.652091 1077750 cri.go:89] found id: "6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	I1119 23:01:01.652095 1077750 cri.go:89] found id: "9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c"
	I1119 23:01:01.652099 1077750 cri.go:89] found id: "27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603"
	I1119 23:01:01.652102 1077750 cri.go:89] found id: "3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d"
	I1119 23:01:01.652105 1077750 cri.go:89] found id: "238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc"
	I1119 23:01:01.652108 1077750 cri.go:89] found id: "b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25"
	I1119 23:01:01.652112 1077750 cri.go:89] found id: "a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629"
	I1119 23:01:01.652120 1077750 cri.go:89] found id: "78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	I1119 23:01:01.652124 1077750 cri.go:89] found id: "da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707"
	I1119 23:01:01.652128 1077750 cri.go:89] found id: ""
	I1119 23:01:01.652183 1077750 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:01.675746 1077750 retry.go:31] will retry after 151.783717ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:01Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:01:01.828097 1077750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:01.842160 1077750 pause.go:52] kubelet running: false
	I1119 23:01:01.842253 1077750 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:02.015807 1077750 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:02.015921 1077750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:02.091423 1077750 cri.go:89] found id: "046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7"
	I1119 23:01:02.091448 1077750 cri.go:89] found id: "b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e"
	I1119 23:01:02.091453 1077750 cri.go:89] found id: "6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	I1119 23:01:02.091457 1077750 cri.go:89] found id: "9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c"
	I1119 23:01:02.091460 1077750 cri.go:89] found id: "27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603"
	I1119 23:01:02.091464 1077750 cri.go:89] found id: "3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d"
	I1119 23:01:02.091467 1077750 cri.go:89] found id: "238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc"
	I1119 23:01:02.091472 1077750 cri.go:89] found id: "b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25"
	I1119 23:01:02.091476 1077750 cri.go:89] found id: "a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629"
	I1119 23:01:02.091483 1077750 cri.go:89] found id: "78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	I1119 23:01:02.091486 1077750 cri.go:89] found id: "da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707"
	I1119 23:01:02.091489 1077750 cri.go:89] found id: ""
	I1119 23:01:02.091541 1077750 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:02.103194 1077750 retry.go:31] will retry after 282.871492ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:02Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:01:02.386805 1077750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:02.408093 1077750 pause.go:52] kubelet running: false
	I1119 23:01:02.408208 1077750 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:02.584893 1077750 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:02.585015 1077750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:02.657564 1077750 cri.go:89] found id: "046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7"
	I1119 23:01:02.657625 1077750 cri.go:89] found id: "b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e"
	I1119 23:01:02.657647 1077750 cri.go:89] found id: "6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	I1119 23:01:02.657671 1077750 cri.go:89] found id: "9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c"
	I1119 23:01:02.657697 1077750 cri.go:89] found id: "27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603"
	I1119 23:01:02.657726 1077750 cri.go:89] found id: "3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d"
	I1119 23:01:02.657750 1077750 cri.go:89] found id: "238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc"
	I1119 23:01:02.657768 1077750 cri.go:89] found id: "b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25"
	I1119 23:01:02.657789 1077750 cri.go:89] found id: "a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629"
	I1119 23:01:02.657820 1077750 cri.go:89] found id: "78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	I1119 23:01:02.657844 1077750 cri.go:89] found id: "da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707"
	I1119 23:01:02.657866 1077750 cri.go:89] found id: ""
	I1119 23:01:02.657954 1077750 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:02.670458 1077750 retry.go:31] will retry after 533.319883ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:02Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:01:03.204195 1077750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:03.218460 1077750 pause.go:52] kubelet running: false
	I1119 23:01:03.218530 1077750 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:03.405090 1077750 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:03.405175 1077750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:03.478236 1077750 cri.go:89] found id: "046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7"
	I1119 23:01:03.478311 1077750 cri.go:89] found id: "b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e"
	I1119 23:01:03.478332 1077750 cri.go:89] found id: "6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	I1119 23:01:03.478356 1077750 cri.go:89] found id: "9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c"
	I1119 23:01:03.478391 1077750 cri.go:89] found id: "27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603"
	I1119 23:01:03.478420 1077750 cri.go:89] found id: "3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d"
	I1119 23:01:03.478445 1077750 cri.go:89] found id: "238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc"
	I1119 23:01:03.478475 1077750 cri.go:89] found id: "b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25"
	I1119 23:01:03.478517 1077750 cri.go:89] found id: "a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629"
	I1119 23:01:03.478556 1077750 cri.go:89] found id: "78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	I1119 23:01:03.478575 1077750 cri.go:89] found id: "da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707"
	I1119 23:01:03.478604 1077750 cri.go:89] found id: ""
	I1119 23:01:03.478700 1077750 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:03.493670 1077750 out.go:203] 
	W1119 23:01:03.496516 1077750 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 23:01:03.496544 1077750 out.go:285] * 
	* 
	W1119 23:01:03.503562 1077750 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:01:03.506345 1077750 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-044665 --alsologtostderr -v=1 failed: exit status 80
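(Every pause retry in the stderr above fails at the same point: crictl finds eleven running containers, but `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so the pause code never gets a container list to act on. A minimal sketch for checking that state by hand over `minikube ssh`, assuming the embed-certs-044665 node is still up; the first two commands are the ones the log shows, the final `ls` is an extra, hypothetical check that is not part of the harness:)

	# containers as the CRI sees them (same crictl filter the pause code uses)
	out/minikube-linux-arm64 ssh -p embed-certs-044665 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# containers as runc sees them under its default state root (this is the call failing above)
	out/minikube-linux-arm64 ssh -p embed-certs-044665 -- sudo runc list -f json
	# does the default runc state root exist at all?
	out/minikube-linux-arm64 ssh -p embed-certs-044665 -- sudo ls /run/runc

(If crictl reports containers while /run/runc is missing, one plausible reading is that the runtime keeps its state under a different root than the one `runc list` defaults to; either way it is consistent with the GUEST_PAUSE error above.)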
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-044665
helpers_test.go:243: (dbg) docker inspect embed-certs-044665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	        "Created": "2025-11-19T22:58:06.768832725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1073213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:59:57.177459051Z",
	            "FinishedAt": "2025-11-19T22:59:56.149428303Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be-json.log",
	        "Name": "/embed-certs-044665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-044665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-044665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	                "LowerDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-044665",
	                "Source": "/var/lib/docker/volumes/embed-certs-044665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-044665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-044665",
	                "name.minikube.sigs.k8s.io": "embed-certs-044665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02dd480a06e594f3a87e5352ed6ea7a567c37ca4f9fce5590a05e5e4927a8521",
	            "SandboxKey": "/var/run/docker/netns/02dd480a06e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-044665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:20:8a:2e:d3:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15bc9118c71b109d30f6317e5a328a97bacbdfe5f367a0001ea8dd4fc8a13fe9",
	                    "EndpointID": "1e183200f9d0779e6f73b36747e3c395d268dc781cc2befdfbad9eb6f3f148d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-044665",
	                        "c2d8d721c15d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
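The inspect output above is what minikube's docker driver reads back when it provisions over SSH: the ports published under NetworkSettings.Ports (22/tcp → 33871, 8443/tcp → 33874) are looked up with a Go template, the same kind of template call that appears in the "Last Start" log below (there against default-k8s-diff-port-841969). A minimal sketch of that lookup outside the harness, reusing the template string from the log; the Go wrapper itself is only illustrative, and docker is assumed to be on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Reads the host port published for the container's SSH port (22/tcp)
	// with the same Go template minikube runs in the "Last Start" log below.
	// "embed-certs-044665" is the container inspected above.
	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"embed-certs-044665").Output()
		if err != nil {
			panic(err)
		}
		// For the run captured above this prints 33871.
		fmt.Println("22/tcp published on host port", strings.TrimSpace(string(out)))
	}
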
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665: exit status 2 (350.936379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
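The "(may be ok)" note is the harness tolerating a non-zero exit at this point: this is the Pause test, so the container (Host) is still Running while the Kubernetes components have been paused, and minikube's status exit code is non-zero whenever not everything is running. A small sketch of the same probe with the binary and profile names from this report; the Go wrapper is illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Runs the same status probe as helpers_test.go:247 and reports both the
	// templated output and the exit code; a paused cluster keeps Host=Running
	// but exits non-zero (exit status 2 in this run).
	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-044665")
		out, err := cmd.Output()
		fmt.Println("host:", strings.TrimSpace(string(out)))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}
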
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25: (1.351125736s)
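The dump that follows is what that `logs -n 25` call collected (the Audit table and the "Last Start" log below). Its log lines use the klog prefix spelled out under "Last Start" ([IWEF]mmdd hh:mm:ss...), so warnings and errors can be pulled back out of a dump like this with a short filter; the program below is only an illustration, not part of the harness:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// Keeps only klog warning/error/fatal lines ("W1119 23:00:09.418086 ...")
	// from a minikube logs dump read on stdin; lines may be tab-indented and
	// very long, so the scanner buffer is enlarged.
	func main() {
		warnOrWorse := regexp.MustCompile(`^\s*[WEF]\d{4} \d{2}:\d{2}:\d{2}\.\d+`)
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			if warnOrWorse.MatchString(sc.Text()) {
				fmt.Println(sc.Text())
			}
		}
	}

For example (file name hypothetical): out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25 | go run filterwarnings.go — against the dump below this would surface lines such as the fix.go:138 "unexpected machine state, will restart" warning.
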
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                              │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                          │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                               │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                              │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                     │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                     │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                          │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                             │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                              │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                             │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:00:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:00:09.020742 1074858 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:00:09.020990 1074858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:00:09.021018 1074858 out.go:374] Setting ErrFile to fd 2...
	I1119 23:00:09.021039 1074858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:00:09.021355 1074858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:00:09.021798 1074858 out.go:368] Setting JSON to false
	I1119 23:00:09.022942 1074858 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16938,"bootTime":1763576271,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:00:09.023055 1074858 start.go:143] virtualization:  
	I1119 23:00:09.025982 1074858 out.go:179] * [default-k8s-diff-port-841969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:00:09.029727 1074858 notify.go:221] Checking for updates...
	I1119 23:00:09.032801 1074858 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:00:09.036194 1074858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:00:09.039121 1074858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:09.042562 1074858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:00:09.045340 1074858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:00:09.048233 1074858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:00:09.051653 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:09.052294 1074858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:00:09.105938 1074858 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:00:09.106057 1074858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:00:09.220239 1074858 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:00:09.202141977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:00:09.220342 1074858 docker.go:319] overlay module found
	I1119 23:00:09.223812 1074858 out.go:179] * Using the docker driver based on existing profile
	I1119 23:00:09.226625 1074858 start.go:309] selected driver: docker
	I1119 23:00:09.226646 1074858 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:09.226753 1074858 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:00:09.227448 1074858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:00:09.343252 1074858 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:00:09.330502625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:00:09.343595 1074858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:09.343621 1074858 cni.go:84] Creating CNI manager for ""
	I1119 23:00:09.343677 1074858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:00:09.343716 1074858 start.go:353] cluster config:
	{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:09.347073 1074858 out.go:179] * Starting "default-k8s-diff-port-841969" primary control-plane node in "default-k8s-diff-port-841969" cluster
	I1119 23:00:09.349860 1074858 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:00:09.352747 1074858 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:00:09.355497 1074858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:00:09.355549 1074858 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:00:09.355569 1074858 cache.go:65] Caching tarball of preloaded images
	I1119 23:00:09.355659 1074858 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:00:09.355675 1074858 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:00:09.355803 1074858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 23:00:09.356013 1074858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:00:09.387733 1074858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:00:09.387759 1074858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:00:09.387773 1074858 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:00:09.387796 1074858 start.go:360] acquireMachinesLock for default-k8s-diff-port-841969: {Name:mke5d323374b95cff07c96188997ebbdcf73748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:00:09.387852 1074858 start.go:364] duration metric: took 35.889µs to acquireMachinesLock for "default-k8s-diff-port-841969"
	I1119 23:00:09.387877 1074858 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:00:09.387887 1074858 fix.go:54] fixHost starting: 
	I1119 23:00:09.388155 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:09.418056 1074858 fix.go:112] recreateIfNeeded on default-k8s-diff-port-841969: state=Stopped err=<nil>
	W1119 23:00:09.418086 1074858 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:00:12.753460 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.822361931s)
	I1119 23:00:12.753520 1073084 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.712151143s)
	I1119 23:00:12.753557 1073084 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044665" to be "Ready" ...
	I1119 23:00:12.753885 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.708502908s)
	I1119 23:00:12.754149 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.283715793s)
	I1119 23:00:12.757939 1073084 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-044665 addons enable metrics-server
	
	I1119 23:00:12.787718 1073084 node_ready.go:49] node "embed-certs-044665" is "Ready"
	I1119 23:00:12.787803 1073084 node_ready.go:38] duration metric: took 34.223105ms for node "embed-certs-044665" to be "Ready" ...
	I1119 23:00:12.787835 1073084 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:00:12.787974 1073084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:00:12.803596 1073084 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:00:09.421229 1074858 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-841969" ...
	I1119 23:00:09.421311 1074858 cli_runner.go:164] Run: docker start default-k8s-diff-port-841969
	I1119 23:00:09.807885 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:09.840733 1074858 kic.go:430] container "default-k8s-diff-port-841969" state is running.
	I1119 23:00:09.841105 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:09.870674 1074858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 23:00:09.870945 1074858 machine.go:94] provisionDockerMachine start ...
	I1119 23:00:09.871025 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:09.899933 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:09.900267 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:09.900278 1074858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:00:09.903104 1074858 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 23:00:13.070921 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 23:00:13.070946 1074858 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-841969"
	I1119 23:00:13.071045 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.095014 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.095329 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.095345 1074858 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-841969 && echo "default-k8s-diff-port-841969" | sudo tee /etc/hostname
	I1119 23:00:13.255346 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 23:00:13.255453 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.289266 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.289576 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.289603 1074858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-841969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-841969/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-841969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:00:13.451854 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:00:13.451958 1074858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:00:13.451995 1074858 ubuntu.go:190] setting up certificates
	I1119 23:00:13.452032 1074858 provision.go:84] configureAuth start
	I1119 23:00:13.452128 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:13.478196 1074858 provision.go:143] copyHostCerts
	I1119 23:00:13.478271 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:00:13.478286 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:00:13.478364 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:00:13.478467 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:00:13.478473 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:00:13.478499 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:00:13.478560 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:00:13.478564 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:00:13.478588 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:00:13.478643 1074858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-841969 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-841969 localhost minikube]
	I1119 23:00:13.787877 1074858 provision.go:177] copyRemoteCerts
	I1119 23:00:13.787949 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:00:13.788001 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.806423 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:13.908876 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:00:13.929629 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 23:00:13.949398 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:00:13.969582 1074858 provision.go:87] duration metric: took 517.509588ms to configureAuth
	I1119 23:00:13.969608 1074858 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:00:13.969808 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:13.969912 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.987183 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.987492 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.987513 1074858 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:00:14.344022 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:00:14.344042 1074858 machine.go:97] duration metric: took 4.473085435s to provisionDockerMachine
	I1119 23:00:14.344054 1074858 start.go:293] postStartSetup for "default-k8s-diff-port-841969" (driver="docker")
	I1119 23:00:14.344081 1074858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:00:14.344146 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:00:14.344193 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.368369 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.479106 1074858 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:00:14.482629 1074858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:00:14.482661 1074858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:00:14.482673 1074858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:00:14.482730 1074858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:00:14.482820 1074858 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:00:14.482978 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:00:14.491476 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:00:14.512213 1074858 start.go:296] duration metric: took 168.143867ms for postStartSetup
	I1119 23:00:14.512297 1074858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:00:14.512336 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.533479 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.648894 1074858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:00:14.654378 1074858 fix.go:56] duration metric: took 5.266483292s for fixHost
	I1119 23:00:14.654401 1074858 start.go:83] releasing machines lock for "default-k8s-diff-port-841969", held for 5.266535361s
	I1119 23:00:14.654485 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:14.671918 1074858 ssh_runner.go:195] Run: cat /version.json
	I1119 23:00:14.671952 1074858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:00:14.671968 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.672012 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.697526 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.703863 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.802593 1074858 ssh_runner.go:195] Run: systemctl --version
	I1119 23:00:14.930958 1074858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:00:14.972376 1074858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:00:14.976727 1074858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:00:14.976823 1074858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:00:14.984618 1074858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 23:00:14.984644 1074858 start.go:496] detecting cgroup driver to use...
	I1119 23:00:14.984675 1074858 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:00:14.984738 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:00:15.001068 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:00:15.027723 1074858 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:00:15.027795 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:00:15.046520 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:00:15.061871 1074858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:00:15.239156 1074858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:00:15.412023 1074858 docker.go:234] disabling docker service ...
	I1119 23:00:15.412106 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:00:15.428499 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:00:15.444141 1074858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:00:15.609325 1074858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:00:15.759499 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:00:15.772961 1074858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:00:15.789145 1074858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:00:15.789287 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.798719 1074858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:00:15.798848 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.808556 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.818235 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.827589 1074858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:00:15.836394 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.846292 1074858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.855513 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.864558 1074858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:00:15.872270 1074858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:00:15.880152 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:16.012266 1074858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:00:16.228199 1074858 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:00:16.228295 1074858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:00:16.234085 1074858 start.go:564] Will wait 60s for crictl version
	I1119 23:00:16.234174 1074858 ssh_runner.go:195] Run: which crictl
	I1119 23:00:16.237860 1074858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:00:16.268543 1074858 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:00:16.268696 1074858 ssh_runner.go:195] Run: crio --version
	I1119 23:00:16.300260 1074858 ssh_runner.go:195] Run: crio --version
	I1119 23:00:16.341813 1074858 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:00:12.806716 1073084 addons.go:515] duration metric: took 7.197426313s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:00:12.808256 1073084 api_server.go:72] duration metric: took 7.199534908s to wait for apiserver process to appear ...
	I1119 23:00:12.808319 1073084 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:00:12.808354 1073084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:00:12.822931 1073084 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:00:12.822964 1073084 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:00:13.308463 1073084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:00:13.323910 1073084 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 23:00:13.326095 1073084 api_server.go:141] control plane version: v1.34.1
	I1119 23:00:13.326121 1073084 api_server.go:131] duration metric: took 517.781335ms to wait for apiserver health ...
	I1119 23:00:13.326130 1073084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:00:13.332538 1073084 system_pods.go:59] 8 kube-system pods found
	I1119 23:00:13.332574 1073084 system_pods.go:61] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:13.332585 1073084 system_pods.go:61] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:13.332592 1073084 system_pods.go:61] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 23:00:13.332599 1073084 system_pods.go:61] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:00:13.332607 1073084 system_pods.go:61] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:13.332612 1073084 system_pods.go:61] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 23:00:13.332619 1073084 system_pods.go:61] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:13.332623 1073084 system_pods.go:61] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 23:00:13.332629 1073084 system_pods.go:74] duration metric: took 6.494691ms to wait for pod list to return data ...
	I1119 23:00:13.332638 1073084 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:00:13.344908 1073084 default_sa.go:45] found service account: "default"
	I1119 23:00:13.344933 1073084 default_sa.go:55] duration metric: took 12.288519ms for default service account to be created ...
	I1119 23:00:13.344946 1073084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:00:13.351382 1073084 system_pods.go:86] 8 kube-system pods found
	I1119 23:00:13.351469 1073084 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:13.351496 1073084 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:13.351540 1073084 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 23:00:13.351589 1073084 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:00:13.351671 1073084 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:13.351699 1073084 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 23:00:13.351753 1073084 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:13.351782 1073084 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 23:00:13.351806 1073084 system_pods.go:126] duration metric: took 6.849902ms to wait for k8s-apps to be running ...
	I1119 23:00:13.351849 1073084 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:00:13.351940 1073084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:00:13.368059 1073084 system_svc.go:56] duration metric: took 16.200854ms WaitForService to wait for kubelet
	I1119 23:00:13.368139 1073084 kubeadm.go:587] duration metric: took 7.759418617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:13.368193 1073084 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:00:13.377165 1073084 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:00:13.377250 1073084 node_conditions.go:123] node cpu capacity is 2
	I1119 23:00:13.377279 1073084 node_conditions.go:105] duration metric: took 9.067249ms to run NodePressure ...
	I1119 23:00:13.377320 1073084 start.go:242] waiting for startup goroutines ...
	I1119 23:00:13.377348 1073084 start.go:247] waiting for cluster config update ...
	I1119 23:00:13.377376 1073084 start.go:256] writing updated cluster config ...
	I1119 23:00:13.377745 1073084 ssh_runner.go:195] Run: rm -f paused
	I1119 23:00:13.385971 1073084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:00:13.393072 1073084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 23:00:15.404828 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:16.345019 1074858 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:00:16.363136 1074858 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 23:00:16.371935 1074858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
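The two Run lines above show the idempotent /etc/hosts update minikube uses: check for the entry with grep, then rewrite the file with any stale entry filtered out and a fresh one appended. A minimal standalone sketch of the same pattern (the hostname and IP below are placeholders, not taken from this run):

    # rewrite /etc/hosts so exactly one entry maps NAME to IP
    NAME=host.minikube.internal   # placeholder hostname
    IP=192.168.85.1               # placeholder address
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$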
	I1119 23:00:16.386285 1074858 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:00:16.386402 1074858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:00:16.386461 1074858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:00:16.435078 1074858 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:00:16.435104 1074858 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:00:16.435165 1074858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:00:16.472086 1074858 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:00:16.472109 1074858 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:00:16.472117 1074858 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 23:00:16.472222 1074858 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-841969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
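The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in minikube renders for this profile; the log lines further down show it being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf followed by a daemon-reload and a kubelet start. As a hedged sketch (paths as shown in the log, ExecStart flags abbreviated), installing such a drop-in by hand would look like:

    # write the override, reload unit files, and restart kubelet
    sudo install -d /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet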
	I1119 23:00:16.472314 1074858 ssh_runner.go:195] Run: crio config
	I1119 23:00:16.542388 1074858 cni.go:84] Creating CNI manager for ""
	I1119 23:00:16.542414 1074858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:00:16.542437 1074858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:00:16.542462 1074858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-841969 NodeName:default-k8s-diff-port-841969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:00:16.542636 1074858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-841969"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
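The YAML above is the full kubeadm/kubelet/kube-proxy configuration minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new (see the scp line just below). A way to sanity-check a config like this without persisting anything on the node, assuming kubeadm of the matching version is on the PATH, is a dry run; this is only a sketch, not a step the test performs:

    # parse and dry-run the generated config; no cluster state is written
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run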
	
	I1119 23:00:16.542724 1074858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:00:16.552045 1074858 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:00:16.552148 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:00:16.561913 1074858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 23:00:16.579277 1074858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:00:16.601691 1074858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1119 23:00:16.617785 1074858 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:00:16.622496 1074858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:00:16.635990 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:16.840441 1074858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:00:16.857266 1074858 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969 for IP: 192.168.85.2
	I1119 23:00:16.857337 1074858 certs.go:195] generating shared ca certs ...
	I1119 23:00:16.857368 1074858 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:16.857540 1074858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:00:16.857629 1074858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:00:16.857667 1074858 certs.go:257] generating profile certs ...
	I1119 23:00:16.857830 1074858 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key
	I1119 23:00:16.857934 1074858 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d
	I1119 23:00:16.858033 1074858 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key
	I1119 23:00:16.858205 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:00:16.858274 1074858 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:00:16.858313 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:00:16.858366 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:00:16.858427 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:00:16.858504 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:00:16.858596 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:00:16.859475 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:00:16.901627 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:00:16.959021 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:00:17.004833 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:00:17.071262 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:00:17.140787 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 23:00:17.184207 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:00:17.235348 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:00:17.290628 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:00:17.327432 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:00:17.357285 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:00:17.400492 1074858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:00:17.417658 1074858 ssh_runner.go:195] Run: openssl version
	I1119 23:00:17.429328 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:00:17.441251 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.445664 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.445728 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.498209 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:00:17.508557 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:00:17.521006 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.528758 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.528904 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.583832 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:00:17.595731 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:00:17.610222 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.615218 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.615286 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.664533 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
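The ls/hash/ln sequence above follows the standard OpenSSL trust-store layout: each CA under /etc/ssl/certs is reachable through a symlink named after its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A minimal sketch of how such a link is derived for an arbitrary PEM file:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # any CA certificate
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # suffix .0 = first cert with this hash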
	I1119 23:00:17.683725 1074858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:00:17.701809 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:00:17.792528 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:00:17.906788 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:00:18.021833 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:00:18.236337 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:00:18.369660 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
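Each -checkend 86400 run above asks OpenSSL whether the certificate will still be valid in 86400 seconds (24 hours): exit status 0 means it will not expire within that window, non-zero means it will (or already has). A standalone sketch of the same check against one of the certs listed in the log:

    # fail loudly if the apiserver-kubelet client cert expires within 24h
    if ! sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null; then
      echo "certificate expires within 24h" >&2
    fi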
	I1119 23:00:18.463099 1074858 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:18.463253 1074858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:00:18.463361 1074858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:00:18.528443 1074858 cri.go:89] found id: "52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033"
	I1119 23:00:18.528520 1074858 cri.go:89] found id: "0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a"
	I1119 23:00:18.528550 1074858 cri.go:89] found id: "868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef"
	I1119 23:00:18.528572 1074858 cri.go:89] found id: "ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99"
	I1119 23:00:18.528610 1074858 cri.go:89] found id: ""
	I1119 23:00:18.528691 1074858 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 23:00:18.551036 1074858 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:00:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:00:18.551165 1074858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:00:18.573009 1074858 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:00:18.573082 1074858 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:00:18.573164 1074858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:00:18.593692 1074858 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:00:18.594649 1074858 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-841969" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:18.595360 1074858 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-841969" cluster setting kubeconfig missing "default-k8s-diff-port-841969" context setting]
	I1119 23:00:18.596373 1074858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.610979 1074858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:00:18.642442 1074858 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 23:00:18.642541 1074858 kubeadm.go:602] duration metric: took 69.423508ms to restartPrimaryControlPlane
	I1119 23:00:18.642570 1074858 kubeadm.go:403] duration metric: took 179.493952ms to StartCluster
	I1119 23:00:18.642618 1074858 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.642722 1074858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:18.644579 1074858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.644907 1074858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:00:18.645289 1074858 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:00:18.645364 1074858 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.645378 1074858 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.645384 1074858 addons.go:248] addon storage-provisioner should already be in state true
	I1119 23:00:18.645407 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.645921 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.646371 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:18.646469 1074858 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.646513 1074858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-841969"
	I1119 23:00:18.646844 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.647114 1074858 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.647151 1074858 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.647190 1074858 addons.go:248] addon dashboard should already be in state true
	I1119 23:00:18.647241 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.648035 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.655409 1074858 out.go:179] * Verifying Kubernetes components...
	I1119 23:00:18.662202 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:18.698572 1074858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:00:18.701619 1074858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:00:18.701641 1074858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:00:18.701716 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.716878 1074858 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.716900 1074858 addons.go:248] addon default-storageclass should already be in state true
	I1119 23:00:18.716925 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.717347 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.730978 1074858 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 23:00:18.734981 1074858 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 23:00:18.738294 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 23:00:18.738320 1074858 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 23:00:18.738401 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.740892 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:18.765061 1074858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:00:18.765085 1074858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:00:18.765152 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.780394 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:18.793892 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	W1119 23:00:17.898107 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:20.398274 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:19.084636 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:00:19.155398 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:00:19.176391 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 23:00:19.176413 1074858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 23:00:19.233624 1074858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:00:19.295150 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 23:00:19.295221 1074858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 23:00:19.383339 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 23:00:19.383421 1074858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 23:00:19.495956 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 23:00:19.496018 1074858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 23:00:19.598302 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 23:00:19.598375 1074858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 23:00:19.664625 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 23:00:19.664654 1074858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 23:00:19.692345 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 23:00:19.692369 1074858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 23:00:19.714664 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 23:00:19.714686 1074858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 23:00:19.739215 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:00:19.739241 1074858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 23:00:19.770733 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
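The long command above applies all ten dashboard manifests in a single kubectl invocation by repeating -f. Since every file already sits under /etc/kubernetes/addons/ and kubectl apply is idempotent, a roughly equivalent and shorter form (a sketch, not what the test runs) is to point -f at the directory:

    # apply every manifest in the addons directory using the node-local kubeconfig
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/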
	W1119 23:00:22.401444 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:24.402728 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:26.905138 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:28.726300 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.641628971s)
	I1119 23:00:28.726366 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.570949222s)
	I1119 23:00:28.726664 1074858 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.493015204s)
	I1119 23:00:28.726694 1074858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 23:00:28.727016 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.956236081s)
	I1119 23:00:28.730420 1074858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-841969 addons enable metrics-server
	
	I1119 23:00:28.777741 1074858 node_ready.go:49] node "default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:28.777766 1074858 node_ready.go:38] duration metric: took 51.050642ms for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 23:00:28.777779 1074858 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:00:28.777839 1074858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:00:28.805985 1074858 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:00:28.808903 1074858 addons.go:515] duration metric: took 10.163593469s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:00:28.825119 1074858 api_server.go:72] duration metric: took 10.180127889s to wait for apiserver process to appear ...
	I1119 23:00:28.825147 1074858 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:00:28.825166 1074858 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 23:00:28.853722 1074858 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
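The healthz probe above is an authenticated GET against the apiserver, retried until it returns 200 with body "ok". With a working kubeconfig the same check can be issued from the client side; the context name below is assumed to match the profile name, as minikube normally configures it:

    # ask the apiserver for its aggregate health; prints "ok" when healthy
    kubectl --context default-k8s-diff-port-841969 get --raw /healthz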
	I1119 23:00:28.857096 1074858 api_server.go:141] control plane version: v1.34.1
	I1119 23:00:28.857128 1074858 api_server.go:131] duration metric: took 31.973161ms to wait for apiserver health ...
	I1119 23:00:28.857137 1074858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:00:28.870221 1074858 system_pods.go:59] 8 kube-system pods found
	I1119 23:00:28.870263 1074858 system_pods.go:61] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:28.870274 1074858 system_pods.go:61] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:28.870290 1074858 system_pods.go:61] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 23:00:28.870299 1074858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 23:00:28.870307 1074858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:28.870318 1074858 system_pods.go:61] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 23:00:28.870327 1074858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:28.870332 1074858 system_pods.go:61] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Running
	I1119 23:00:28.870343 1074858 system_pods.go:74] duration metric: took 13.199493ms to wait for pod list to return data ...
	I1119 23:00:28.870351 1074858 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:00:28.883364 1074858 default_sa.go:45] found service account: "default"
	I1119 23:00:28.883394 1074858 default_sa.go:55] duration metric: took 13.026724ms for default service account to be created ...
	I1119 23:00:28.883415 1074858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:00:28.969486 1074858 system_pods.go:86] 8 kube-system pods found
	I1119 23:00:28.969530 1074858 system_pods.go:89] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:28.969541 1074858 system_pods.go:89] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:28.969547 1074858 system_pods.go:89] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 23:00:28.969554 1074858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 23:00:28.969563 1074858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:28.969580 1074858 system_pods.go:89] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 23:00:28.969597 1074858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:28.969607 1074858 system_pods.go:89] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Running
	I1119 23:00:28.969614 1074858 system_pods.go:126] duration metric: took 86.193535ms to wait for k8s-apps to be running ...
	I1119 23:00:28.969626 1074858 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:00:28.969693 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:00:29.008347 1074858 system_svc.go:56] duration metric: took 38.708166ms WaitForService to wait for kubelet
	I1119 23:00:29.008380 1074858 kubeadm.go:587] duration metric: took 10.363408456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:29.008410 1074858 node_conditions.go:102] verifying NodePressure condition ...
	W1119 23:00:29.400959 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:31.412284 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:29.082849 1074858 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:00:29.082900 1074858 node_conditions.go:123] node cpu capacity is 2
	I1119 23:00:29.082913 1074858 node_conditions.go:105] duration metric: took 74.496947ms to run NodePressure ...
	I1119 23:00:29.082924 1074858 start.go:242] waiting for startup goroutines ...
	I1119 23:00:29.082931 1074858 start.go:247] waiting for cluster config update ...
	I1119 23:00:29.082942 1074858 start.go:256] writing updated cluster config ...
	I1119 23:00:29.083262 1074858 ssh_runner.go:195] Run: rm -f paused
	I1119 23:00:29.090358 1074858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:00:29.126086 1074858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 23:00:31.133600 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:33.633087 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:33.903554 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:36.400815 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:36.133948 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:38.632706 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:38.401713 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:40.899717 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:41.131390 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:43.132968 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:43.404573 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:45.899759 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:47.399988 1073084 pod_ready.go:94] pod "coredns-66bc5c9577-kcs7v" is "Ready"
	I1119 23:00:47.400013 1073084 pod_ready.go:86] duration metric: took 34.006868176s for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.403216 1073084 pod_ready.go:83] waiting for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.408596 1073084 pod_ready.go:94] pod "etcd-embed-certs-044665" is "Ready"
	I1119 23:00:47.408625 1073084 pod_ready.go:86] duration metric: took 5.386359ms for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.411131 1073084 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.415883 1073084 pod_ready.go:94] pod "kube-apiserver-embed-certs-044665" is "Ready"
	I1119 23:00:47.415910 1073084 pod_ready.go:86] duration metric: took 4.751497ms for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.418800 1073084 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.597717 1073084 pod_ready.go:94] pod "kube-controller-manager-embed-certs-044665" is "Ready"
	I1119 23:00:47.597789 1073084 pod_ready.go:86] duration metric: took 178.962719ms for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.798098 1073084 pod_ready.go:83] waiting for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.198015 1073084 pod_ready.go:94] pod "kube-proxy-w5t4l" is "Ready"
	I1119 23:00:48.198045 1073084 pod_ready.go:86] duration metric: took 399.918409ms for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.397510 1073084 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.797787 1073084 pod_ready.go:94] pod "kube-scheduler-embed-certs-044665" is "Ready"
	I1119 23:00:48.797813 1073084 pod_ready.go:86] duration metric: took 400.226892ms for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.797830 1073084 pod_ready.go:40] duration metric: took 35.411771544s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
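The pod_ready loop above polls each control-plane pod by label until it reports Ready (or disappears), capped at 4 minutes. From a client machine, roughly the same wait can be expressed with kubectl, assuming the standard component labels listed in the log, e.g. for CoreDNS:

    # block until every coredns pod in kube-system is Ready, or 4m elapses
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m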
	I1119 23:00:48.855248 1073084 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:00:48.860175 1073084 out.go:179] * Done! kubectl is now configured to use "embed-certs-044665" cluster and "default" namespace by default
	W1119 23:00:45.135504 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:47.631201 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:49.631507 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:52.132022 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:54.632097 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:56.632198 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:58.633262 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	I1119 23:00:59.137630 1074858 pod_ready.go:94] pod "coredns-66bc5c9577-zkjxn" is "Ready"
	I1119 23:00:59.137665 1074858 pod_ready.go:86] duration metric: took 30.011541039s for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.151934 1074858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.162737 1074858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.162766 1074858 pod_ready.go:86] duration metric: took 10.800623ms for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.246048 1074858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.251028 1074858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.251056 1074858 pod_ready.go:86] duration metric: took 4.977288ms for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.253452 1074858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.330766 1074858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.330794 1074858 pod_ready.go:86] duration metric: took 77.31636ms for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.531122 1074858 pod_ready.go:83] waiting for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.930535 1074858 pod_ready.go:94] pod "kube-proxy-fbmdp" is "Ready"
	I1119 23:00:59.930564 1074858 pod_ready.go:86] duration metric: took 399.411198ms for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.133432 1074858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.530094 1074858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-841969" is "Ready"
	I1119 23:01:00.530122 1074858 pod_ready.go:86] duration metric: took 396.662196ms for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.530136 1074858 pod_ready.go:40] duration metric: took 31.439732783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:01:00.651048 1074858 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:01:00.656133 1074858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-841969" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.14098231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142351527Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bd0edc3f12faacfde6cc11ada04de3264a1f56d1d40d54ae0e7cbf5c7d55afa5/merged/etc/passwd: no such file or directory"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142481719Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bd0edc3f12faacfde6cc11ada04de3264a1f56d1d40d54ae0e7cbf5c7d55afa5/merged/etc/group: no such file or directory"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142939019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.161886548Z" level=info msg="Created container 046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7: kube-system/storage-provisioner/storage-provisioner" id=d14bd0a3-cdf1-48dd-be40-640d80bf04ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.16305496Z" level=info msg="Starting container: 046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7" id=300b9b77-6338-4ab0-93fc-c07865be8baf name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.168161989Z" level=info msg="Started container" PID=1688 containerID=046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7 description=kube-system/storage-provisioner/storage-provisioner id=300b9b77-6338-4ab0-93fc-c07865be8baf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a47836b2b4581676528ec27444e2a42c3870945312935260c03fafdc8447388c
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.824163562Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.834215461Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.834251096Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.83427718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838706025Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838747174Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838772545Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.842969535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.843005646Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.843029524Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847189361Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847226908Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847251524Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851545631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851582284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851607621Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.856251582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.856293043Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	046253f691f1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   a47836b2b4581       storage-provisioner                          kube-system
	78b59efcdc208       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   7235765cef024       dashboard-metrics-scraper-6ffb444bf9-ppg2g   kubernetes-dashboard
	da3c06bf1658f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   a7d8051a5f46b       kubernetes-dashboard-855c9754f9-z42jm        kubernetes-dashboard
	b694fb8148d95       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   f87c88389694d       coredns-66bc5c9577-kcs7v                     kube-system
	976bca26a5d76       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   5cf4132d4a2da       busybox                                      default
	6990f841c7b94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   a47836b2b4581       storage-provisioner                          kube-system
	9295ca087f37c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   bce461d12015d       kube-proxy-w5t4l                             kube-system
	27d9d65afc659       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   380fdc90f0545       kindnet-bphl7                                kube-system
	3e4714d2eeb4d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   146913ca75e15       kube-apiserver-embed-certs-044665            kube-system
	238c5d17777e8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   8d839835b1ffb       kube-controller-manager-embed-certs-044665   kube-system
	b5283b9195f23       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   998b3c5782e90       etcd-embed-certs-044665                      kube-system
	a525567ed9ba5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   55cf28c262d22       kube-scheduler-embed-certs-044665            kube-system
	
	
	==> coredns [b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50611 - 51893 "HINFO IN 280384622609538922.4130317657295906375. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006045161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-044665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-044665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-044665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-044665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-044665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                f8def6c5-4626-4320-af5a-5122b8c6bdf4
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-kcs7v                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-044665                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-bphl7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-044665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-044665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-w5t4l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-044665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ppg2g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z42jm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m34s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m34s)  kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m34s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-044665 event: Registered Node embed-certs-044665 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-044665 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-044665 event: Registered Node embed-certs-044665 in Controller
	
	
	==> dmesg <==
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25] <==
	{"level":"warn","ts":"2025-11-19T23:00:08.994724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.083274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.146005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.250497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.266839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.320751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.389927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.424542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.489852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.529972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.547629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.608118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.637250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.667685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.752192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.769878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.790460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.892721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.956735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.023463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.079893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.135922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.172370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.381820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38382","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:04 up  4:43,  0 user,  load average: 2.89, 3.09, 2.59
	Linux embed-certs-044665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603] <==
	I1119 23:00:12.555709       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:00:12.620594       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 23:00:12.620756       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:00:12.620769       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:00:12.620785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:00:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:00:12.821948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:00:12.822042       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:00:12.822078       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:00:12.823426       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 23:00:42.823062       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 23:00:42.823187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 23:00:42.823294       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 23:00:42.823351       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1119 23:00:44.123108       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 23:00:44.123171       1 metrics.go:72] Registering metrics
	I1119 23:00:44.123256       1 controller.go:711] "Syncing nftables rules"
	I1119 23:00:52.822947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 23:00:52.823149       1 main.go:301] handling current node
	I1119 23:01:02.829128       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 23:01:02.829163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d] <==
	I1119 23:00:11.466612       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:00:11.479248       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 23:00:11.480133       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:00:11.480172       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:00:11.483161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:00:11.483242       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:00:11.503478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:00:11.508792       1 aggregator.go:171] initial CRD sync complete...
	I1119 23:00:11.509526       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 23:00:11.509545       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:00:11.509565       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:00:11.513478       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:00:11.534492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 23:00:11.548576       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 23:00:11.824471       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:00:12.167672       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:00:12.252287       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:00:12.343130       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:00:12.410850       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:00:12.438246       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:00:12.594858       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.221.53"}
	I1119 23:00:12.635620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.43.125"}
	I1119 23:00:14.918774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:00:15.215730       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:00:15.266780       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc] <==
	I1119 23:00:14.637201       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:00:14.641525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:00:14.644697       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:14.649067       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 23:00:14.651228       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 23:00:14.660470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 23:00:14.661665       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 23:00:14.661786       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:00:14.663693       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:00:14.663744       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 23:00:14.664739       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:00:14.667881       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 23:00:14.668058       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 23:00:14.671396       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:14.679713       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 23:00:14.684952       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:00:14.690297       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:00:14.690412       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:00:14.690507       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-044665"
	I1119 23:00:14.690566       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:00:14.699382       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:14.709556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:14.709805       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:00:14.709847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:00:15.247481       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c] <==
	I1119 23:00:12.816123       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:00:12.908774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:00:13.023246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:00:13.023369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 23:00:13.023494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:00:13.058428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:00:13.058490       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:00:13.062249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:00:13.062554       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:00:13.062583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:13.064059       1 config.go:200] "Starting service config controller"
	I1119 23:00:13.064082       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:00:13.064098       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:00:13.064102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:00:13.064113       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:00:13.064117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:00:13.064763       1 config.go:309] "Starting node config controller"
	I1119 23:00:13.064780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:00:13.064787       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:00:13.164467       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:00:13.164570       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:00:13.164596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629] <==
	I1119 23:00:08.243238       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:00:11.416714       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 23:00:11.416836       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 23:00:11.416871       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:00:11.416901       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:00:11.537186       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:00:11.537211       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:11.539841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:00:11.539937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:11.539956       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:11.539981       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:00:11.640301       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:00:15 embed-certs-044665 kubelet[788]: I1119 23:00:15.340132     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d5c12c99-8f33-4930-9791-95d621818711-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ppg2g\" (UID: \"d5c12c99-8f33-4930-9791-95d621818711\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g"
	Nov 19 23:00:16 embed-certs-044665 kubelet[788]: W1119 23:00:16.103846     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e WatchSource:0}: Error finding container 7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e: Status 404 returned error can't find the container with id 7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e
	Nov 19 23:00:16 embed-certs-044665 kubelet[788]: W1119 23:00:16.127389     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f WatchSource:0}: Error finding container a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f: Status 404 returned error can't find the container with id a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f
	Nov 19 23:00:17 embed-certs-044665 kubelet[788]: I1119 23:00:17.091373     788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 23:00:23 embed-certs-044665 kubelet[788]: I1119 23:00:23.029103     788 scope.go:117] "RemoveContainer" containerID="80195445b6a46c5c09d6355ce9bba07b84afe900636890d2ed7cbcd675639542"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: I1119 23:00:24.040442     788 scope.go:117] "RemoveContainer" containerID="80195445b6a46c5c09d6355ce9bba07b84afe900636890d2ed7cbcd675639542"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: I1119 23:00:24.040829     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: E1119 23:00:24.041015     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:25 embed-certs-044665 kubelet[788]: I1119 23:00:25.052638     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:25 embed-certs-044665 kubelet[788]: E1119 23:00:25.052835     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:26 embed-certs-044665 kubelet[788]: I1119 23:00:26.062911     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:26 embed-certs-044665 kubelet[788]: E1119 23:00:26.063148     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:36 embed-certs-044665 kubelet[788]: I1119 23:00:36.888673     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.101895     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.102833     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: E1119 23:00:37.105066     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.132279     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z42jm" podStartSLOduration=7.506387811 podStartE2EDuration="22.132263734s" podCreationTimestamp="2025-11-19 23:00:15 +0000 UTC" firstStartedPulling="2025-11-19 23:00:16.130792602 +0000 UTC m=+11.473373002" lastFinishedPulling="2025-11-19 23:00:30.756668517 +0000 UTC m=+26.099248925" observedRunningTime="2025-11-19 23:00:31.097930714 +0000 UTC m=+26.440511171" watchObservedRunningTime="2025-11-19 23:00:37.132263734 +0000 UTC m=+32.474844134"
	Nov 19 23:00:43 embed-certs-044665 kubelet[788]: I1119 23:00:43.123735     788 scope.go:117] "RemoveContainer" containerID="6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	Nov 19 23:00:46 embed-certs-044665 kubelet[788]: I1119 23:00:46.063507     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:46 embed-certs-044665 kubelet[788]: E1119 23:00:46.064190     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:56 embed-certs-044665 kubelet[788]: I1119 23:00:56.888128     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:56 embed-certs-044665 kubelet[788]: E1119 23:00:56.888321     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707] <==
	2025/11/19 23:00:30 Starting overwatch
	2025/11/19 23:00:30 Using namespace: kubernetes-dashboard
	2025/11/19 23:00:30 Using in-cluster config to connect to apiserver
	2025/11/19 23:00:30 Using secret token for csrf signing
	2025/11/19 23:00:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 23:00:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 23:00:30 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 23:00:30 Generating JWE encryption key
	2025/11/19 23:00:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 23:00:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 23:00:31 Initializing JWE encryption key from synchronized object
	2025/11/19 23:00:31 Creating in-cluster Sidecar client
	2025/11/19 23:00:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:31 Serving insecurely on HTTP port: 9090
	2025/11/19 23:01:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7] <==
	I1119 23:00:43.182499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 23:00:43.194262       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 23:00:43.194817       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 23:00:43.198792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:46.654530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:50.915479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:54.513950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:57.567503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.589994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.598359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:00.598596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 23:01:00.600064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09!
	I1119 23:01:00.598656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5b69f9e-50f8-4cbb-93e0-ac4960fffe1d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09 became leader
	W1119 23:01:00.607320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.614277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:00.702080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09!
	W1119 23:01:02.617840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:02.623016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:04.626190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:04.631579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242] <==
	I1119 23:00:12.580098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 23:00:42.583321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-044665 -n embed-certs-044665: exit status 2 (360.704044ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-044665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-044665
helpers_test.go:243: (dbg) docker inspect embed-certs-044665:

-- stdout --
	[
	    {
	        "Id": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	        "Created": "2025-11-19T22:58:06.768832725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1073213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:59:57.177459051Z",
	            "FinishedAt": "2025-11-19T22:59:56.149428303Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be-json.log",
	        "Name": "/embed-certs-044665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-044665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-044665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be",
	                "LowerDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ddceb9b716d8b5272e53c0e81e56ac34f6fc95f0aa2d4efebcb03213a97c8ae9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-044665",
	                "Source": "/var/lib/docker/volumes/embed-certs-044665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-044665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-044665",
	                "name.minikube.sigs.k8s.io": "embed-certs-044665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02dd480a06e594f3a87e5352ed6ea7a567c37ca4f9fce5590a05e5e4927a8521",
	            "SandboxKey": "/var/run/docker/netns/02dd480a06e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-044665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:20:8a:2e:d3:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "15bc9118c71b109d30f6317e5a328a97bacbdfe5f367a0001ea8dd4fc8a13fe9",
	                    "EndpointID": "1e183200f9d0779e6f73b36747e3c395d268dc781cc2befdfbad9eb6f3f148d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-044665",
	                        "c2d8d721c15d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
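The document above is plain docker container inspect output for the embed-certs-044665 node; the part the driver actually needs for SSH access is NetworkSettings.Ports, which the Last Start log further down reads with the template -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'". As a minimal illustrative sketch only (not part of helpers_test.go; the struct fields follow the Docker Engine API and the container name is taken from this report), the same lookup can be done by decoding the inspect JSON directly:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed for the port lookup; the full inspect document
	// (as shown above) carries many more.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker container inspect prints a JSON array with one entry per container.
		out, err := exec.Command("docker", "container", "inspect", "embed-certs-044665").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// For the inspect output above this resolves to 127.0.0.1:33871.
		ssh := entries[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort)
	}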
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665: exit status 2 (374.655119ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-044665 logs -n 25: (1.326953484s)
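The post-mortem flow captured above is: query the host state with status --format={{.Host}} -p <profile>, treat a non-zero exit as a possible failure, and then collect the last 25 lines of minikube logs before printing them. A minimal sketch of that flow, assuming the hypothetical helper name collectPostMortem (illustration only; the real logic lives in helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// collectPostMortem mirrors the sequence above: check the host state and,
	// if the status command exits non-zero, capture the last 25 log lines.
	func collectPostMortem(minikube, profile string) {
		status, err := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
		fmt.Printf("host state: %s (err: %v)\n", status, err)
		if err == nil {
			return // status exited 0, nothing to post-mortem
		}
		logs, _ := exec.Command(minikube, "-p", profile, "logs", "-n", "25").CombinedOutput()
		fmt.Println(string(logs))
	}

	func main() {
		collectPostMortem("out/minikube-linux-arm64", "embed-certs-044665")
	}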
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-018508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │                     │
	│ stop    │ -p no-preload-018508 --alsologtostderr -v=3                                                                                                                              │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:56 UTC │ 19 Nov 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ start   │ -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ image   │ old-k8s-version-191961 image list --format=json                                                                                                                          │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:57 UTC │
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                               │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                              │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                     │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                     │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                          │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                             │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                              │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                             │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:00:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:00:09.020742 1074858 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:00:09.020990 1074858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:00:09.021018 1074858 out.go:374] Setting ErrFile to fd 2...
	I1119 23:00:09.021039 1074858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:00:09.021355 1074858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:00:09.021798 1074858 out.go:368] Setting JSON to false
	I1119 23:00:09.022942 1074858 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16938,"bootTime":1763576271,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:00:09.023055 1074858 start.go:143] virtualization:  
	I1119 23:00:09.025982 1074858 out.go:179] * [default-k8s-diff-port-841969] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:00:09.029727 1074858 notify.go:221] Checking for updates...
	I1119 23:00:09.032801 1074858 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:00:09.036194 1074858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:00:09.039121 1074858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:09.042562 1074858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:00:09.045340 1074858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:00:09.048233 1074858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:00:09.051653 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:09.052294 1074858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:00:09.105938 1074858 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:00:09.106057 1074858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:00:09.220239 1074858 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:00:09.202141977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:00:09.220342 1074858 docker.go:319] overlay module found
	I1119 23:00:09.223812 1074858 out.go:179] * Using the docker driver based on existing profile
	I1119 23:00:09.226625 1074858 start.go:309] selected driver: docker
	I1119 23:00:09.226646 1074858 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:09.226753 1074858 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:00:09.227448 1074858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:00:09.343252 1074858 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:00:09.330502625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:00:09.343595 1074858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:09.343621 1074858 cni.go:84] Creating CNI manager for ""
	I1119 23:00:09.343677 1074858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:00:09.343716 1074858 start.go:353] cluster config:
	{Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:09.347073 1074858 out.go:179] * Starting "default-k8s-diff-port-841969" primary control-plane node in "default-k8s-diff-port-841969" cluster
	I1119 23:00:09.349860 1074858 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:00:09.352747 1074858 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:00:09.355497 1074858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:00:09.355549 1074858 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:00:09.355569 1074858 cache.go:65] Caching tarball of preloaded images
	I1119 23:00:09.355659 1074858 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:00:09.355675 1074858 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:00:09.355803 1074858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 23:00:09.356013 1074858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:00:09.387733 1074858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:00:09.387759 1074858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:00:09.387773 1074858 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:00:09.387796 1074858 start.go:360] acquireMachinesLock for default-k8s-diff-port-841969: {Name:mke5d323374b95cff07c96188997ebbdcf73748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:00:09.387852 1074858 start.go:364] duration metric: took 35.889µs to acquireMachinesLock for "default-k8s-diff-port-841969"
	I1119 23:00:09.387877 1074858 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:00:09.387887 1074858 fix.go:54] fixHost starting: 
	I1119 23:00:09.388155 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:09.418056 1074858 fix.go:112] recreateIfNeeded on default-k8s-diff-port-841969: state=Stopped err=<nil>
	W1119 23:00:09.418086 1074858 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:00:12.753460 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.822361931s)
	I1119 23:00:12.753520 1073084 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.712151143s)
	I1119 23:00:12.753557 1073084 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044665" to be "Ready" ...
	I1119 23:00:12.753885 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.708502908s)
	I1119 23:00:12.754149 1073084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.283715793s)
	I1119 23:00:12.757939 1073084 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-044665 addons enable metrics-server
	
	I1119 23:00:12.787718 1073084 node_ready.go:49] node "embed-certs-044665" is "Ready"
	I1119 23:00:12.787803 1073084 node_ready.go:38] duration metric: took 34.223105ms for node "embed-certs-044665" to be "Ready" ...
	I1119 23:00:12.787835 1073084 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:00:12.787974 1073084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:00:12.803596 1073084 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:00:09.421229 1074858 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-841969" ...
	I1119 23:00:09.421311 1074858 cli_runner.go:164] Run: docker start default-k8s-diff-port-841969
	I1119 23:00:09.807885 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:09.840733 1074858 kic.go:430] container "default-k8s-diff-port-841969" state is running.
	I1119 23:00:09.841105 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:09.870674 1074858 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/config.json ...
	I1119 23:00:09.870945 1074858 machine.go:94] provisionDockerMachine start ...
	I1119 23:00:09.871025 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:09.899933 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:09.900267 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:09.900278 1074858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:00:09.903104 1074858 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 23:00:13.070921 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 23:00:13.070946 1074858 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-841969"
	I1119 23:00:13.071045 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.095014 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.095329 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.095345 1074858 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-841969 && echo "default-k8s-diff-port-841969" | sudo tee /etc/hostname
	I1119 23:00:13.255346 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-841969
	
	I1119 23:00:13.255453 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.289266 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.289576 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.289603 1074858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-841969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-841969/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-841969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:00:13.451854 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:00:13.451958 1074858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:00:13.451995 1074858 ubuntu.go:190] setting up certificates
	I1119 23:00:13.452032 1074858 provision.go:84] configureAuth start
	I1119 23:00:13.452128 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:13.478196 1074858 provision.go:143] copyHostCerts
	I1119 23:00:13.478271 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:00:13.478286 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:00:13.478364 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:00:13.478467 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:00:13.478473 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:00:13.478499 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:00:13.478560 1074858 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:00:13.478564 1074858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:00:13.478588 1074858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:00:13.478643 1074858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-841969 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-841969 localhost minikube]
	I1119 23:00:13.787877 1074858 provision.go:177] copyRemoteCerts
	I1119 23:00:13.787949 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:00:13.788001 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.806423 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:13.908876 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:00:13.929629 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 23:00:13.949398 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:00:13.969582 1074858 provision.go:87] duration metric: took 517.509588ms to configureAuth
	I1119 23:00:13.969608 1074858 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:00:13.969808 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:13.969912 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:13.987183 1074858 main.go:143] libmachine: Using SSH client type: native
	I1119 23:00:13.987492 1074858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33876 <nil> <nil>}
	I1119 23:00:13.987513 1074858 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:00:14.344022 1074858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:00:14.344042 1074858 machine.go:97] duration metric: took 4.473085435s to provisionDockerMachine
	I1119 23:00:14.344054 1074858 start.go:293] postStartSetup for "default-k8s-diff-port-841969" (driver="docker")
	I1119 23:00:14.344081 1074858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:00:14.344146 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:00:14.344193 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.368369 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.479106 1074858 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:00:14.482629 1074858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:00:14.482661 1074858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:00:14.482673 1074858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:00:14.482730 1074858 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:00:14.482820 1074858 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:00:14.482978 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:00:14.491476 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:00:14.512213 1074858 start.go:296] duration metric: took 168.143867ms for postStartSetup
	I1119 23:00:14.512297 1074858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:00:14.512336 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.533479 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.648894 1074858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:00:14.654378 1074858 fix.go:56] duration metric: took 5.266483292s for fixHost
	I1119 23:00:14.654401 1074858 start.go:83] releasing machines lock for "default-k8s-diff-port-841969", held for 5.266535361s
	I1119 23:00:14.654485 1074858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-841969
	I1119 23:00:14.671918 1074858 ssh_runner.go:195] Run: cat /version.json
	I1119 23:00:14.671952 1074858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:00:14.671968 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.672012 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:14.697526 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.703863 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:14.802593 1074858 ssh_runner.go:195] Run: systemctl --version
	I1119 23:00:14.930958 1074858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:00:14.972376 1074858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:00:14.976727 1074858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:00:14.976823 1074858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:00:14.984618 1074858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 23:00:14.984644 1074858 start.go:496] detecting cgroup driver to use...
	I1119 23:00:14.984675 1074858 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:00:14.984738 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:00:15.001068 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:00:15.027723 1074858 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:00:15.027795 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:00:15.046520 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:00:15.061871 1074858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:00:15.239156 1074858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:00:15.412023 1074858 docker.go:234] disabling docker service ...
	I1119 23:00:15.412106 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:00:15.428499 1074858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:00:15.444141 1074858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:00:15.609325 1074858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:00:15.759499 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:00:15.772961 1074858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:00:15.789145 1074858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:00:15.789287 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.798719 1074858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:00:15.798848 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.808556 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.818235 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.827589 1074858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:00:15.836394 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.846292 1074858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.855513 1074858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:00:15.864558 1074858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:00:15.872270 1074858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:00:15.880152 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:16.012266 1074858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:00:16.228199 1074858 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:00:16.228295 1074858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:00:16.234085 1074858 start.go:564] Will wait 60s for crictl version
	I1119 23:00:16.234174 1074858 ssh_runner.go:195] Run: which crictl
	I1119 23:00:16.237860 1074858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:00:16.268543 1074858 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:00:16.268696 1074858 ssh_runner.go:195] Run: crio --version
	I1119 23:00:16.300260 1074858 ssh_runner.go:195] Run: crio --version
	I1119 23:00:16.341813 1074858 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:00:12.806716 1073084 addons.go:515] duration metric: took 7.197426313s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:00:12.808256 1073084 api_server.go:72] duration metric: took 7.199534908s to wait for apiserver process to appear ...
	I1119 23:00:12.808319 1073084 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:00:12.808354 1073084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:00:12.822931 1073084 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:00:12.822964 1073084 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:00:13.308463 1073084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:00:13.323910 1073084 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 23:00:13.326095 1073084 api_server.go:141] control plane version: v1.34.1
	I1119 23:00:13.326121 1073084 api_server.go:131] duration metric: took 517.781335ms to wait for apiserver health ...
	I1119 23:00:13.326130 1073084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:00:13.332538 1073084 system_pods.go:59] 8 kube-system pods found
	I1119 23:00:13.332574 1073084 system_pods.go:61] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:13.332585 1073084 system_pods.go:61] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:13.332592 1073084 system_pods.go:61] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 23:00:13.332599 1073084 system_pods.go:61] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:00:13.332607 1073084 system_pods.go:61] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:13.332612 1073084 system_pods.go:61] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 23:00:13.332619 1073084 system_pods.go:61] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:13.332623 1073084 system_pods.go:61] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 23:00:13.332629 1073084 system_pods.go:74] duration metric: took 6.494691ms to wait for pod list to return data ...
	I1119 23:00:13.332638 1073084 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:00:13.344908 1073084 default_sa.go:45] found service account: "default"
	I1119 23:00:13.344933 1073084 default_sa.go:55] duration metric: took 12.288519ms for default service account to be created ...
	I1119 23:00:13.344946 1073084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:00:13.351382 1073084 system_pods.go:86] 8 kube-system pods found
	I1119 23:00:13.351469 1073084 system_pods.go:89] "coredns-66bc5c9577-kcs7v" [fd801ea5-7011-49a7-be54-65189f230b9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:13.351496 1073084 system_pods.go:89] "etcd-embed-certs-044665" [1f305620-918e-4fc8-bbcc-7cf5bf58546a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:13.351540 1073084 system_pods.go:89] "kindnet-bphl7" [d19c80b2-4ab0-4850-8ffa-65b62e4121f6] Running
	I1119 23:00:13.351589 1073084 system_pods.go:89] "kube-apiserver-embed-certs-044665" [5f9fc0e0-ca07-4df7-b3b4-c766cfc2a5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:00:13.351671 1073084 system_pods.go:89] "kube-controller-manager-embed-certs-044665" [5e59578f-53d4-472d-ba4f-9318b85f9f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:13.351699 1073084 system_pods.go:89] "kube-proxy-w5t4l" [aaa92ce4-cadd-40ec-aa55-4a007a59e54b] Running
	I1119 23:00:13.351753 1073084 system_pods.go:89] "kube-scheduler-embed-certs-044665" [0b4c7fd5-6ef5-4fab-b92e-c645b120f537] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:13.351782 1073084 system_pods.go:89] "storage-provisioner" [0275c0e8-3fa6-4ce0-9cb7-11007aabd1d3] Running
	I1119 23:00:13.351806 1073084 system_pods.go:126] duration metric: took 6.849902ms to wait for k8s-apps to be running ...
	I1119 23:00:13.351849 1073084 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:00:13.351940 1073084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:00:13.368059 1073084 system_svc.go:56] duration metric: took 16.200854ms WaitForService to wait for kubelet
	I1119 23:00:13.368139 1073084 kubeadm.go:587] duration metric: took 7.759418617s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:13.368193 1073084 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:00:13.377165 1073084 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:00:13.377250 1073084 node_conditions.go:123] node cpu capacity is 2
	I1119 23:00:13.377279 1073084 node_conditions.go:105] duration metric: took 9.067249ms to run NodePressure ...
	I1119 23:00:13.377320 1073084 start.go:242] waiting for startup goroutines ...
	I1119 23:00:13.377348 1073084 start.go:247] waiting for cluster config update ...
	I1119 23:00:13.377376 1073084 start.go:256] writing updated cluster config ...
	I1119 23:00:13.377745 1073084 ssh_runner.go:195] Run: rm -f paused
	I1119 23:00:13.385971 1073084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:00:13.393072 1073084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 23:00:15.404828 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:16.345019 1074858 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-841969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:00:16.363136 1074858 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 23:00:16.371935 1074858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
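The hosts-file rewrite above is idempotent: strip any existing line for the name, append the fresh ip-to-name mapping, write to a temp file, and sudo-copy it over /etc/hosts. A rough Go sketch of building that same shell pipeline (the paths and names come from the log; the helper function itself is mine, not minikube's API):

package main

import "fmt"

// buildHostsUpdateCmd reproduces the idempotent /etc/hosts rewrite seen in the
// log: drop any line ending in "<tab><name>", append "<ip><tab><name>", write
// the result to a temp file, then sudo-copy it back over /etc/hosts.
func buildHostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	fmt.Println(buildHostsUpdateCmd("192.168.85.1", "host.minikube.internal"))
}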
	I1119 23:00:16.386285 1074858 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:00:16.386402 1074858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:00:16.386461 1074858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:00:16.435078 1074858 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:00:16.435104 1074858 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:00:16.435165 1074858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:00:16.472086 1074858 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:00:16.472109 1074858 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:00:16.472117 1074858 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 23:00:16.472222 1074858 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-841969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:00:16.472314 1074858 ssh_runner.go:195] Run: crio config
	I1119 23:00:16.542388 1074858 cni.go:84] Creating CNI manager for ""
	I1119 23:00:16.542414 1074858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:00:16.542437 1074858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:00:16.542462 1074858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-841969 NodeName:default-k8s-diff-port-841969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:00:16.542636 1074858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-841969"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:00:16.542724 1074858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:00:16.552045 1074858 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:00:16.552148 1074858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:00:16.561913 1074858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 23:00:16.579277 1074858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:00:16.601691 1074858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1119 23:00:16.617785 1074858 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:00:16.622496 1074858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:00:16.635990 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:16.840441 1074858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:00:16.857266 1074858 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969 for IP: 192.168.85.2
	I1119 23:00:16.857337 1074858 certs.go:195] generating shared ca certs ...
	I1119 23:00:16.857368 1074858 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:16.857540 1074858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:00:16.857629 1074858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:00:16.857667 1074858 certs.go:257] generating profile certs ...
	I1119 23:00:16.857830 1074858 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.key
	I1119 23:00:16.857934 1074858 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key.02fb524d
	I1119 23:00:16.858033 1074858 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key
	I1119 23:00:16.858205 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:00:16.858274 1074858 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:00:16.858313 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:00:16.858366 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:00:16.858427 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:00:16.858504 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:00:16.858596 1074858 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:00:16.859475 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:00:16.901627 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:00:16.959021 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:00:17.004833 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:00:17.071262 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:00:17.140787 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 23:00:17.184207 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:00:17.235348 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:00:17.290628 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:00:17.327432 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:00:17.357285 1074858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:00:17.400492 1074858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:00:17.417658 1074858 ssh_runner.go:195] Run: openssl version
	I1119 23:00:17.429328 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:00:17.441251 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.445664 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.445728 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:00:17.498209 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:00:17.508557 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:00:17.521006 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.528758 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.528904 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:00:17.583832 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:00:17.595731 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:00:17.610222 1074858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.615218 1074858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.615286 1074858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:00:17.664533 1074858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
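The three symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: `openssl x509 -hash -noout -in <cert>` prints the hash that, with a `.0` suffix, lets the TLS stack locate the certificate in /etc/ssl/certs. A hedged Go sketch of deriving that link name by shelling out to openssl (the helper name is mine; it assumes the openssl binary is on PATH, as it is inside the minikube node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink computes the subject-hash of a PEM certificate and returns
// the "<hash>.0" file name that /etc/ssl/certs expects.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	// In this particular run the minikubeCA cert maps to "b5213941.0".
	fmt.Println(link)
}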
	I1119 23:00:17.683725 1074858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:00:17.701809 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:00:17.792528 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:00:17.906788 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:00:18.021833 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:00:18.236337 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:00:18.369660 1074858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
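Each `-checkend 86400` probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window and would have to be regenerated before restarting the control plane. A minimal sketch of the same check (the function name is mine):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay mirrors the `openssl x509 -checkend 86400` probes in the
// log: openssl exits 0 if the cert is still valid 86400s from now and non-zero
// otherwise, so any command error here is treated as "renew this cert".
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}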
	I1119 23:00:18.463099 1074858 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-841969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-841969 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:00:18.463253 1074858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:00:18.463361 1074858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:00:18.528443 1074858 cri.go:89] found id: "52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033"
	I1119 23:00:18.528520 1074858 cri.go:89] found id: "0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a"
	I1119 23:00:18.528550 1074858 cri.go:89] found id: "868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef"
	I1119 23:00:18.528572 1074858 cri.go:89] found id: "ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99"
	I1119 23:00:18.528610 1074858 cri.go:89] found id: ""
	I1119 23:00:18.528691 1074858 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 23:00:18.551036 1074858 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:00:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:00:18.551165 1074858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:00:18.573009 1074858 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:00:18.573082 1074858 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:00:18.573164 1074858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:00:18.593692 1074858 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:00:18.594649 1074858 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-841969" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:18.595360 1074858 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-841969" cluster setting kubeconfig missing "default-k8s-diff-port-841969" context setting]
	I1119 23:00:18.596373 1074858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.610979 1074858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:00:18.642442 1074858 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 23:00:18.642541 1074858 kubeadm.go:602] duration metric: took 69.423508ms to restartPrimaryControlPlane
	I1119 23:00:18.642570 1074858 kubeadm.go:403] duration metric: took 179.493952ms to StartCluster
	I1119 23:00:18.642618 1074858 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.642722 1074858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:00:18.644579 1074858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:00:18.644907 1074858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:00:18.645289 1074858 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:00:18.645364 1074858 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.645378 1074858 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.645384 1074858 addons.go:248] addon storage-provisioner should already be in state true
	I1119 23:00:18.645407 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.645921 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.646371 1074858 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:00:18.646469 1074858 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.646513 1074858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-841969"
	I1119 23:00:18.646844 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.647114 1074858 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-841969"
	I1119 23:00:18.647151 1074858 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.647190 1074858 addons.go:248] addon dashboard should already be in state true
	I1119 23:00:18.647241 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.648035 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.655409 1074858 out.go:179] * Verifying Kubernetes components...
	I1119 23:00:18.662202 1074858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:00:18.698572 1074858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:00:18.701619 1074858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:00:18.701641 1074858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:00:18.701716 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.716878 1074858 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-841969"
	W1119 23:00:18.716900 1074858 addons.go:248] addon default-storageclass should already be in state true
	I1119 23:00:18.716925 1074858 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:00:18.717347 1074858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:00:18.730978 1074858 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 23:00:18.734981 1074858 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 23:00:18.738294 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 23:00:18.738320 1074858 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 23:00:18.738401 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.740892 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:18.765061 1074858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:00:18.765085 1074858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:00:18.765152 1074858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:00:18.780394 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:00:18.793892 1074858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	W1119 23:00:17.898107 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:20.398274 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:19.084636 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:00:19.155398 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:00:19.176391 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 23:00:19.176413 1074858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 23:00:19.233624 1074858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:00:19.295150 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 23:00:19.295221 1074858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 23:00:19.383339 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 23:00:19.383421 1074858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 23:00:19.495956 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 23:00:19.496018 1074858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 23:00:19.598302 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 23:00:19.598375 1074858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 23:00:19.664625 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 23:00:19.664654 1074858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 23:00:19.692345 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 23:00:19.692369 1074858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 23:00:19.714664 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 23:00:19.714686 1074858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 23:00:19.739215 1074858 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:00:19.739241 1074858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 23:00:19.770733 1074858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
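Addon installation above follows a fixed pattern: each manifest is copied under /etc/kubernetes/addons/ on the node, then everything is applied in a single `kubectl apply -f ... -f ...` invocation against the node-local kubeconfig. A rough sketch of composing that command (the binary and kubeconfig paths follow this log; the helper is illustrative only):

package main

import (
	"fmt"
	"strings"
)

// applyAddonsCmd builds the single kubectl invocation used in the log to apply
// every manifest previously copied under /etc/kubernetes/addons/.
func applyAddonsCmd(manifests []string) string {
	var b strings.Builder
	b.WriteString("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ")
	b.WriteString("/var/lib/minikube/binaries/v1.34.1/kubectl apply")
	for _, m := range manifests {
		fmt.Fprintf(&b, " -f /etc/kubernetes/addons/%s", m)
	}
	return b.String()
}

func main() {
	fmt.Println(applyAddonsCmd([]string{"dashboard-ns.yaml", "dashboard-svc.yaml"}))
}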
	W1119 23:00:22.401444 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:24.402728 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:26.905138 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:28.726300 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.641628971s)
	I1119 23:00:28.726366 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.570949222s)
	I1119 23:00:28.726664 1074858 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.493015204s)
	I1119 23:00:28.726694 1074858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 23:00:28.727016 1074858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.956236081s)
	I1119 23:00:28.730420 1074858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-841969 addons enable metrics-server
	
	I1119 23:00:28.777741 1074858 node_ready.go:49] node "default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:28.777766 1074858 node_ready.go:38] duration metric: took 51.050642ms for node "default-k8s-diff-port-841969" to be "Ready" ...
	I1119 23:00:28.777779 1074858 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:00:28.777839 1074858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:00:28.805985 1074858 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:00:28.808903 1074858 addons.go:515] duration metric: took 10.163593469s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:00:28.825119 1074858 api_server.go:72] duration metric: took 10.180127889s to wait for apiserver process to appear ...
	I1119 23:00:28.825147 1074858 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:00:28.825166 1074858 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 23:00:28.853722 1074858 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 23:00:28.857096 1074858 api_server.go:141] control plane version: v1.34.1
	I1119 23:00:28.857128 1074858 api_server.go:131] duration metric: took 31.973161ms to wait for apiserver health ...
	I1119 23:00:28.857137 1074858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:00:28.870221 1074858 system_pods.go:59] 8 kube-system pods found
	I1119 23:00:28.870263 1074858 system_pods.go:61] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:28.870274 1074858 system_pods.go:61] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:28.870290 1074858 system_pods.go:61] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 23:00:28.870299 1074858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 23:00:28.870307 1074858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:28.870318 1074858 system_pods.go:61] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 23:00:28.870327 1074858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:28.870332 1074858 system_pods.go:61] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Running
	I1119 23:00:28.870343 1074858 system_pods.go:74] duration metric: took 13.199493ms to wait for pod list to return data ...
	I1119 23:00:28.870351 1074858 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:00:28.883364 1074858 default_sa.go:45] found service account: "default"
	I1119 23:00:28.883394 1074858 default_sa.go:55] duration metric: took 13.026724ms for default service account to be created ...
	I1119 23:00:28.883415 1074858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:00:28.969486 1074858 system_pods.go:86] 8 kube-system pods found
	I1119 23:00:28.969530 1074858 system_pods.go:89] "coredns-66bc5c9577-zkjxn" [1c4a619c-0219-4f38-897a-d3989d4d3ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:00:28.969541 1074858 system_pods.go:89] "etcd-default-k8s-diff-port-841969" [3c8643e2-2f77-48bb-86fd-832d737dc91d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:00:28.969547 1074858 system_pods.go:89] "kindnet-65cjg" [c756094a-14e9-41ea-b7a7-0af539154203] Running
	I1119 23:00:28.969554 1074858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-841969" [54a2fc1c-cce5-430e-9ffa-c1ef86387118] Running
	I1119 23:00:28.969563 1074858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-841969" [9cceb253-eba3-4c8e-8d84-acbd3924c0f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:00:28.969580 1074858 system_pods.go:89] "kube-proxy-fbmdp" [ef28c6ce-40e6-411e-b7e0-1a6b5914c710] Running
	I1119 23:00:28.969597 1074858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-841969" [0dea9f40-de0a-460e-862e-98beaf3e8971] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:00:28.969607 1074858 system_pods.go:89] "storage-provisioner" [c79703f3-5114-46df-8d46-987b4a56f647] Running
	I1119 23:00:28.969614 1074858 system_pods.go:126] duration metric: took 86.193535ms to wait for k8s-apps to be running ...
	I1119 23:00:28.969626 1074858 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:00:28.969693 1074858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:00:29.008347 1074858 system_svc.go:56] duration metric: took 38.708166ms WaitForService to wait for kubelet
	I1119 23:00:29.008380 1074858 kubeadm.go:587] duration metric: took 10.363408456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:00:29.008410 1074858 node_conditions.go:102] verifying NodePressure condition ...
	W1119 23:00:29.400959 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:31.412284 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:29.082849 1074858 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:00:29.082900 1074858 node_conditions.go:123] node cpu capacity is 2
	I1119 23:00:29.082913 1074858 node_conditions.go:105] duration metric: took 74.496947ms to run NodePressure ...
	I1119 23:00:29.082924 1074858 start.go:242] waiting for startup goroutines ...
	I1119 23:00:29.082931 1074858 start.go:247] waiting for cluster config update ...
	I1119 23:00:29.082942 1074858 start.go:256] writing updated cluster config ...
	I1119 23:00:29.083262 1074858 ssh_runner.go:195] Run: rm -f paused
	I1119 23:00:29.090358 1074858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:00:29.126086 1074858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 23:00:31.133600 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:33.633087 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:33.903554 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:36.400815 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:36.133948 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:38.632706 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:38.401713 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:40.899717 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:41.131390 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:43.132968 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:43.404573 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	W1119 23:00:45.899759 1073084 pod_ready.go:104] pod "coredns-66bc5c9577-kcs7v" is not "Ready", error: <nil>
	I1119 23:00:47.399988 1073084 pod_ready.go:94] pod "coredns-66bc5c9577-kcs7v" is "Ready"
	I1119 23:00:47.400013 1073084 pod_ready.go:86] duration metric: took 34.006868176s for pod "coredns-66bc5c9577-kcs7v" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.403216 1073084 pod_ready.go:83] waiting for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.408596 1073084 pod_ready.go:94] pod "etcd-embed-certs-044665" is "Ready"
	I1119 23:00:47.408625 1073084 pod_ready.go:86] duration metric: took 5.386359ms for pod "etcd-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.411131 1073084 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.415883 1073084 pod_ready.go:94] pod "kube-apiserver-embed-certs-044665" is "Ready"
	I1119 23:00:47.415910 1073084 pod_ready.go:86] duration metric: took 4.751497ms for pod "kube-apiserver-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.418800 1073084 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.597717 1073084 pod_ready.go:94] pod "kube-controller-manager-embed-certs-044665" is "Ready"
	I1119 23:00:47.597789 1073084 pod_ready.go:86] duration metric: took 178.962719ms for pod "kube-controller-manager-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:47.798098 1073084 pod_ready.go:83] waiting for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.198015 1073084 pod_ready.go:94] pod "kube-proxy-w5t4l" is "Ready"
	I1119 23:00:48.198045 1073084 pod_ready.go:86] duration metric: took 399.918409ms for pod "kube-proxy-w5t4l" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.397510 1073084 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.797787 1073084 pod_ready.go:94] pod "kube-scheduler-embed-certs-044665" is "Ready"
	I1119 23:00:48.797813 1073084 pod_ready.go:86] duration metric: took 400.226892ms for pod "kube-scheduler-embed-certs-044665" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:48.797830 1073084 pod_ready.go:40] duration metric: took 35.411771544s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:00:48.855248 1073084 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:00:48.860175 1073084 out.go:179] * Done! kubectl is now configured to use "embed-certs-044665" cluster and "default" namespace by default
	W1119 23:00:45.135504 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:47.631201 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:49.631507 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:52.132022 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:54.632097 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:56.632198 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	W1119 23:00:58.633262 1074858 pod_ready.go:104] pod "coredns-66bc5c9577-zkjxn" is not "Ready", error: <nil>
	I1119 23:00:59.137630 1074858 pod_ready.go:94] pod "coredns-66bc5c9577-zkjxn" is "Ready"
	I1119 23:00:59.137665 1074858 pod_ready.go:86] duration metric: took 30.011541039s for pod "coredns-66bc5c9577-zkjxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.151934 1074858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.162737 1074858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.162766 1074858 pod_ready.go:86] duration metric: took 10.800623ms for pod "etcd-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.246048 1074858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.251028 1074858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.251056 1074858 pod_ready.go:86] duration metric: took 4.977288ms for pod "kube-apiserver-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.253452 1074858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.330766 1074858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-841969" is "Ready"
	I1119 23:00:59.330794 1074858 pod_ready.go:86] duration metric: took 77.31636ms for pod "kube-controller-manager-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.531122 1074858 pod_ready.go:83] waiting for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:00:59.930535 1074858 pod_ready.go:94] pod "kube-proxy-fbmdp" is "Ready"
	I1119 23:00:59.930564 1074858 pod_ready.go:86] duration metric: took 399.411198ms for pod "kube-proxy-fbmdp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.133432 1074858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.530094 1074858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-841969" is "Ready"
	I1119 23:01:00.530122 1074858 pod_ready.go:86] duration metric: took 396.662196ms for pod "kube-scheduler-default-k8s-diff-port-841969" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:01:00.530136 1074858 pod_ready.go:40] duration metric: took 31.439732783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:01:00.651048 1074858 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:01:00.656133 1074858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-841969" cluster and "default" namespace by default
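The long tail of both start logs is the pod_ready loop: repeatedly listing kube-system pods that carry one of the component labels and waiting until each reports the Ready condition (or disappears). A hedged client-go sketch of that readiness poll, assuming the kubeconfig path and the kube-dns selector from this run (this is not minikube's pod_ready.go, which also tolerates pods being deleted mid-wait):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the label selector
// currently has its Ready condition set to True.
func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Kubeconfig path matches this run's Jenkins workspace; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21918-860325/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, _ := podsReady(cs, "k8s-app=kube-dns")
		fmt.Println("coredns ready:", ok)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}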
	
	
	==> CRI-O <==
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.14098231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142351527Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bd0edc3f12faacfde6cc11ada04de3264a1f56d1d40d54ae0e7cbf5c7d55afa5/merged/etc/passwd: no such file or directory"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142481719Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bd0edc3f12faacfde6cc11ada04de3264a1f56d1d40d54ae0e7cbf5c7d55afa5/merged/etc/group: no such file or directory"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.142939019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.161886548Z" level=info msg="Created container 046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7: kube-system/storage-provisioner/storage-provisioner" id=d14bd0a3-cdf1-48dd-be40-640d80bf04ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.16305496Z" level=info msg="Starting container: 046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7" id=300b9b77-6338-4ab0-93fc-c07865be8baf name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:00:43 embed-certs-044665 crio[660]: time="2025-11-19T23:00:43.168161989Z" level=info msg="Started container" PID=1688 containerID=046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7 description=kube-system/storage-provisioner/storage-provisioner id=300b9b77-6338-4ab0-93fc-c07865be8baf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a47836b2b4581676528ec27444e2a42c3870945312935260c03fafdc8447388c
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.824163562Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.834215461Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.834251096Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.83427718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838706025Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838747174Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.838772545Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.842969535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.843005646Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.843029524Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847189361Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847226908Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.847251524Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851545631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851582284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.851607621Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.856251582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:00:52 embed-certs-044665 crio[660]: time="2025-11-19T23:00:52.856293043Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	046253f691f1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   a47836b2b4581       storage-provisioner                          kube-system
	78b59efcdc208       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   7235765cef024       dashboard-metrics-scraper-6ffb444bf9-ppg2g   kubernetes-dashboard
	da3c06bf1658f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   a7d8051a5f46b       kubernetes-dashboard-855c9754f9-z42jm        kubernetes-dashboard
	b694fb8148d95       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   f87c88389694d       coredns-66bc5c9577-kcs7v                     kube-system
	976bca26a5d76       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   5cf4132d4a2da       busybox                                      default
	6990f841c7b94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   a47836b2b4581       storage-provisioner                          kube-system
	9295ca087f37c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   bce461d12015d       kube-proxy-w5t4l                             kube-system
	27d9d65afc659       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   380fdc90f0545       kindnet-bphl7                                kube-system
	3e4714d2eeb4d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   146913ca75e15       kube-apiserver-embed-certs-044665            kube-system
	238c5d17777e8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8d839835b1ffb       kube-controller-manager-embed-certs-044665   kube-system
	b5283b9195f23       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   998b3c5782e90       etcd-embed-certs-044665                      kube-system
	a525567ed9ba5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   55cf28c262d22       kube-scheduler-embed-certs-044665            kube-system
	
	
	==> coredns [b694fb8148d95bc7d6e5da0a9295bccb19e7dfad4fe732fbd9a131704b9a740e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50611 - 51893 "HINFO IN 280384622609538922.4130317657295906375. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006045161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-044665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-044665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-044665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-044665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:58:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:00:42 +0000   Wed, 19 Nov 2025 22:59:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-044665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                f8def6c5-4626-4320-af5a-5122b8c6bdf4
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-kcs7v                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-044665                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-bphl7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-044665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-044665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-w5t4l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-044665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ppg2g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z42jm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m36s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m36s)  kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m36s)  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-044665 event: Registered Node embed-certs-044665 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-044665 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-044665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-044665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node embed-certs-044665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-044665 event: Registered Node embed-certs-044665 in Controller
	
	
	==> dmesg <==
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b5283b9195f2300b0f3bf25c9e7583045f11dde81a5fd0910a5da6bb40682d25] <==
	{"level":"warn","ts":"2025-11-19T23:00:08.994724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.083274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.146005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.250497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.266839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.320751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.389927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.424542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.489852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.529972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.547629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.608118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.637250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.667685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.752192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.769878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.790460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.892721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:09.956735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.023463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.079893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.135922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.172370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:10.381820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38382","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:07 up  4:43,  0 user,  load average: 2.74, 3.05, 2.58
	Linux embed-certs-044665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27d9d65afc659c5d60c9be89510e7bf466b92560a75a4d3e7dc76e48b5f8a603] <==
	I1119 23:00:12.555709       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:00:12.620594       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 23:00:12.620756       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:00:12.620769       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:00:12.620785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:00:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:00:12.821948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:00:12.822042       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:00:12.822078       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:00:12.823426       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 23:00:42.823062       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 23:00:42.823187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 23:00:42.823294       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 23:00:42.823351       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1119 23:00:44.123108       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 23:00:44.123171       1 metrics.go:72] Registering metrics
	I1119 23:00:44.123256       1 controller.go:711] "Syncing nftables rules"
	I1119 23:00:52.822947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 23:00:52.823149       1 main.go:301] handling current node
	I1119 23:01:02.829128       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 23:01:02.829163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e4714d2eeb4d37d4b62db18654ed28db444c97903232f09cd78f3e6313a061d] <==
	I1119 23:00:11.466612       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:00:11.479248       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 23:00:11.480133       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:00:11.480172       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:00:11.483161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:00:11.483242       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:00:11.503478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:00:11.508792       1 aggregator.go:171] initial CRD sync complete...
	I1119 23:00:11.509526       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 23:00:11.509545       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:00:11.509565       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:00:11.513478       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:00:11.534492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 23:00:11.548576       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 23:00:11.824471       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:00:12.167672       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:00:12.252287       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:00:12.343130       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:00:12.410850       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:00:12.438246       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:00:12.594858       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.221.53"}
	I1119 23:00:12.635620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.43.125"}
	I1119 23:00:14.918774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:00:15.215730       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:00:15.266780       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [238c5d17777e8d2d8923962dd5ffbc314c8b532c9ab3fc173611b6116da486cc] <==
	I1119 23:00:14.637201       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:00:14.641525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:00:14.644697       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:14.649067       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 23:00:14.651228       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 23:00:14.660470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 23:00:14.661665       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 23:00:14.661786       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:00:14.663693       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:00:14.663744       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 23:00:14.664739       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:00:14.667881       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 23:00:14.668058       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 23:00:14.671396       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:14.679713       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 23:00:14.684952       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:00:14.690297       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:00:14.690412       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:00:14.690507       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-044665"
	I1119 23:00:14.690566       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:00:14.699382       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:14.709556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:14.709805       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:00:14.709847       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:00:15.247481       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [9295ca087f37c40e52f5e9c205c549117945e572b801bd5688abb137d331094c] <==
	I1119 23:00:12.816123       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:00:12.908774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:00:13.023246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:00:13.023369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 23:00:13.023494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:00:13.058428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:00:13.058490       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:00:13.062249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:00:13.062554       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:00:13.062583       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:13.064059       1 config.go:200] "Starting service config controller"
	I1119 23:00:13.064082       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:00:13.064098       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:00:13.064102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:00:13.064113       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:00:13.064117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:00:13.064763       1 config.go:309] "Starting node config controller"
	I1119 23:00:13.064780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:00:13.064787       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:00:13.164467       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:00:13.164570       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:00:13.164596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a525567ed9ba51753bb6c93d24d09f8268b5b85b4ad3edd8a3d369d63acfb629] <==
	I1119 23:00:08.243238       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:00:11.416714       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 23:00:11.416836       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 23:00:11.416871       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:00:11.416901       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:00:11.537186       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:00:11.537211       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:11.539841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:00:11.539937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:11.539956       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:11.539981       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:00:11.640301       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:00:15 embed-certs-044665 kubelet[788]: I1119 23:00:15.340132     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d5c12c99-8f33-4930-9791-95d621818711-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ppg2g\" (UID: \"d5c12c99-8f33-4930-9791-95d621818711\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g"
	Nov 19 23:00:16 embed-certs-044665 kubelet[788]: W1119 23:00:16.103846     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e WatchSource:0}: Error finding container 7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e: Status 404 returned error can't find the container with id 7235765cef024f9c7ede4107f8a0e162d201989b2cd2584b73eba708b6e2b20e
	Nov 19 23:00:16 embed-certs-044665 kubelet[788]: W1119 23:00:16.127389     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c2d8d721c15dad0757d84052b9b1c10917aec35a1c1c412d8dd53a6a6b2ce2be/crio-a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f WatchSource:0}: Error finding container a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f: Status 404 returned error can't find the container with id a7d8051a5f46b9810a573b692420765b300bc4a20e97ce254ae249b0d4ee219f
	Nov 19 23:00:17 embed-certs-044665 kubelet[788]: I1119 23:00:17.091373     788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 23:00:23 embed-certs-044665 kubelet[788]: I1119 23:00:23.029103     788 scope.go:117] "RemoveContainer" containerID="80195445b6a46c5c09d6355ce9bba07b84afe900636890d2ed7cbcd675639542"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: I1119 23:00:24.040442     788 scope.go:117] "RemoveContainer" containerID="80195445b6a46c5c09d6355ce9bba07b84afe900636890d2ed7cbcd675639542"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: I1119 23:00:24.040829     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:24 embed-certs-044665 kubelet[788]: E1119 23:00:24.041015     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:25 embed-certs-044665 kubelet[788]: I1119 23:00:25.052638     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:25 embed-certs-044665 kubelet[788]: E1119 23:00:25.052835     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:26 embed-certs-044665 kubelet[788]: I1119 23:00:26.062911     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:26 embed-certs-044665 kubelet[788]: E1119 23:00:26.063148     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:36 embed-certs-044665 kubelet[788]: I1119 23:00:36.888673     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.101895     788 scope.go:117] "RemoveContainer" containerID="81bc315b260db1851b26da076a1b3c59c597d1d722df8e2c38b3fca9c7c2bd0e"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.102833     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: E1119 23:00:37.105066     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:37 embed-certs-044665 kubelet[788]: I1119 23:00:37.132279     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z42jm" podStartSLOduration=7.506387811 podStartE2EDuration="22.132263734s" podCreationTimestamp="2025-11-19 23:00:15 +0000 UTC" firstStartedPulling="2025-11-19 23:00:16.130792602 +0000 UTC m=+11.473373002" lastFinishedPulling="2025-11-19 23:00:30.756668517 +0000 UTC m=+26.099248925" observedRunningTime="2025-11-19 23:00:31.097930714 +0000 UTC m=+26.440511171" watchObservedRunningTime="2025-11-19 23:00:37.132263734 +0000 UTC m=+32.474844134"
	Nov 19 23:00:43 embed-certs-044665 kubelet[788]: I1119 23:00:43.123735     788 scope.go:117] "RemoveContainer" containerID="6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242"
	Nov 19 23:00:46 embed-certs-044665 kubelet[788]: I1119 23:00:46.063507     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:46 embed-certs-044665 kubelet[788]: E1119 23:00:46.064190     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:00:56 embed-certs-044665 kubelet[788]: I1119 23:00:56.888128     788 scope.go:117] "RemoveContainer" containerID="78b59efcdc208b805d9728dfaf2415d44a5a21987401f359e234e5f289bbf803"
	Nov 19 23:00:56 embed-certs-044665 kubelet[788]: E1119 23:00:56.888321     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ppg2g_kubernetes-dashboard(d5c12c99-8f33-4930-9791-95d621818711)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ppg2g" podUID="d5c12c99-8f33-4930-9791-95d621818711"
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:01:01 embed-certs-044665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [da3c06bf1658f0eaaa170dc1d4b37e9a0e6138c18f8111f4802b6a78c62ac707] <==
	2025/11/19 23:00:30 Using namespace: kubernetes-dashboard
	2025/11/19 23:00:30 Using in-cluster config to connect to apiserver
	2025/11/19 23:00:30 Using secret token for csrf signing
	2025/11/19 23:00:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 23:00:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 23:00:30 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 23:00:30 Generating JWE encryption key
	2025/11/19 23:00:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 23:00:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 23:00:31 Initializing JWE encryption key from synchronized object
	2025/11/19 23:00:31 Creating in-cluster Sidecar client
	2025/11/19 23:00:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:31 Serving insecurely on HTTP port: 9090
	2025/11/19 23:01:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:30 Starting overwatch
	
	
	==> storage-provisioner [046253f691f1c5e38c86edc820fa15d4f33bbe6b693ab4687901755a2fb83ee7] <==
	I1119 23:00:43.182499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 23:00:43.194262       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 23:00:43.194817       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 23:00:43.198792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:46.654530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:50.915479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:54.513950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:00:57.567503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.589994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.598359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:00.598596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 23:01:00.600064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09!
	I1119 23:01:00.598656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5b69f9e-50f8-4cbb-93e0-ac4960fffe1d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09 became leader
	W1119 23:01:00.607320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:00.614277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:00.702080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-044665_776f180c-4fe3-47de-84e9-40ee82230b09!
	W1119 23:01:02.617840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:02.623016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:04.626190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:04.631579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:06.636079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:06.641785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6990f841c7b944aa089d35bb782bb72dc9d89cb0be5ebc0461b759210b4bf242] <==
	I1119 23:00:12.580098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 23:00:42.583321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-044665 -n embed-certs-044665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-044665 -n embed-certs-044665: exit status 2 (370.029935ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-044665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-841969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-841969 --alsologtostderr -v=1: exit status 80 (2.147280586s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-841969 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:01:12.761601 1079481 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:01:12.761806 1079481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:12.761821 1079481 out.go:374] Setting ErrFile to fd 2...
	I1119 23:01:12.761827 1079481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:12.762098 1079481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:01:12.762383 1079481 out.go:368] Setting JSON to false
	I1119 23:01:12.762420 1079481 mustload.go:66] Loading cluster: default-k8s-diff-port-841969
	I1119 23:01:12.763052 1079481 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:12.763657 1079481 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-841969 --format={{.State.Status}}
	I1119 23:01:12.784248 1079481 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:01:12.784577 1079481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:12.884392 1079481 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:58 SystemTime:2025-11-19 23:01:12.873654107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:12.885116 1079481 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-841969 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 23:01:12.893831 1079481 out.go:179] * Pausing node default-k8s-diff-port-841969 ... 
	I1119 23:01:12.898269 1079481 host.go:66] Checking if "default-k8s-diff-port-841969" exists ...
	I1119 23:01:12.898652 1079481 ssh_runner.go:195] Run: systemctl --version
	I1119 23:01:12.898711 1079481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-841969
	I1119 23:01:12.916782 1079481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33876 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/default-k8s-diff-port-841969/id_rsa Username:docker}
	I1119 23:01:13.024389 1079481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:13.043348 1079481 pause.go:52] kubelet running: true
	I1119 23:01:13.043423 1079481 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:13.346436 1079481 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:13.346531 1079481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:13.453115 1079481 cri.go:89] found id: "43bf488bf860dd3706258502ba7df99909404dcb85083c3969e024ce8d42f62d"
	I1119 23:01:13.453144 1079481 cri.go:89] found id: "2c7c69bd383a24d8f61d31bcd131f8cd9735445766b7fdf4afb0bc3df5e95e00"
	I1119 23:01:13.453149 1079481 cri.go:89] found id: "9445f4a3071f735b76fe6e727b1751f6879b126daf5d16886837fd4e7f8508bf"
	I1119 23:01:13.453154 1079481 cri.go:89] found id: "8d7655df3929094b8bfb0a3d41a9d7a3521de9bba6fc3ede8a051d2aab2bc56f"
	I1119 23:01:13.453158 1079481 cri.go:89] found id: "05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213"
	I1119 23:01:13.453162 1079481 cri.go:89] found id: "52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033"
	I1119 23:01:13.453165 1079481 cri.go:89] found id: "0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a"
	I1119 23:01:13.453168 1079481 cri.go:89] found id: "868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef"
	I1119 23:01:13.453171 1079481 cri.go:89] found id: "ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99"
	I1119 23:01:13.453178 1079481 cri.go:89] found id: "9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755"
	I1119 23:01:13.453181 1079481 cri.go:89] found id: "dab8910b75f812e5f5ebad9ae21982e1ccaaf0c104cfae67f601b80b5213f688"
	I1119 23:01:13.453185 1079481 cri.go:89] found id: ""
	I1119 23:01:13.453235 1079481 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:13.468753 1079481 retry.go:31] will retry after 235.439007ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:13Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:01:13.705227 1079481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:13.718825 1079481 pause.go:52] kubelet running: false
	I1119 23:01:13.718919 1079481 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:13.918265 1079481 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:13.918353 1079481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:14.004884 1079481 cri.go:89] found id: "43bf488bf860dd3706258502ba7df99909404dcb85083c3969e024ce8d42f62d"
	I1119 23:01:14.004915 1079481 cri.go:89] found id: "2c7c69bd383a24d8f61d31bcd131f8cd9735445766b7fdf4afb0bc3df5e95e00"
	I1119 23:01:14.004920 1079481 cri.go:89] found id: "9445f4a3071f735b76fe6e727b1751f6879b126daf5d16886837fd4e7f8508bf"
	I1119 23:01:14.004924 1079481 cri.go:89] found id: "8d7655df3929094b8bfb0a3d41a9d7a3521de9bba6fc3ede8a051d2aab2bc56f"
	I1119 23:01:14.004928 1079481 cri.go:89] found id: "05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213"
	I1119 23:01:14.004932 1079481 cri.go:89] found id: "52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033"
	I1119 23:01:14.004935 1079481 cri.go:89] found id: "0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a"
	I1119 23:01:14.004939 1079481 cri.go:89] found id: "868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef"
	I1119 23:01:14.004942 1079481 cri.go:89] found id: "ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99"
	I1119 23:01:14.004950 1079481 cri.go:89] found id: "9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755"
	I1119 23:01:14.004953 1079481 cri.go:89] found id: "dab8910b75f812e5f5ebad9ae21982e1ccaaf0c104cfae67f601b80b5213f688"
	I1119 23:01:14.004957 1079481 cri.go:89] found id: ""
	I1119 23:01:14.005022 1079481 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:14.020839 1079481 retry.go:31] will retry after 473.269117ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:14Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:01:14.494490 1079481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:01:14.508941 1079481 pause.go:52] kubelet running: false
	I1119 23:01:14.509008 1079481 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:01:14.716312 1079481 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:01:14.716397 1079481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:01:14.804777 1079481 cri.go:89] found id: "43bf488bf860dd3706258502ba7df99909404dcb85083c3969e024ce8d42f62d"
	I1119 23:01:14.804801 1079481 cri.go:89] found id: "2c7c69bd383a24d8f61d31bcd131f8cd9735445766b7fdf4afb0bc3df5e95e00"
	I1119 23:01:14.804807 1079481 cri.go:89] found id: "9445f4a3071f735b76fe6e727b1751f6879b126daf5d16886837fd4e7f8508bf"
	I1119 23:01:14.804811 1079481 cri.go:89] found id: "8d7655df3929094b8bfb0a3d41a9d7a3521de9bba6fc3ede8a051d2aab2bc56f"
	I1119 23:01:14.804814 1079481 cri.go:89] found id: "05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213"
	I1119 23:01:14.804818 1079481 cri.go:89] found id: "52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033"
	I1119 23:01:14.804821 1079481 cri.go:89] found id: "0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a"
	I1119 23:01:14.804829 1079481 cri.go:89] found id: "868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef"
	I1119 23:01:14.804832 1079481 cri.go:89] found id: "ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99"
	I1119 23:01:14.804846 1079481 cri.go:89] found id: "9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755"
	I1119 23:01:14.804853 1079481 cri.go:89] found id: "dab8910b75f812e5f5ebad9ae21982e1ccaaf0c104cfae67f601b80b5213f688"
	I1119 23:01:14.804857 1079481 cri.go:89] found id: ""
	I1119 23:01:14.804907 1079481 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:01:14.822200 1079481 out.go:203] 
	W1119 23:01:14.825630 1079481 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:01:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 23:01:14.825653 1079481 out.go:285] * 
	* 
	W1119 23:01:14.832658 1079481 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:01:14.835776 1079481 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-841969 --alsologtostderr -v=1 failed: exit status 80
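The GUEST_PAUSE failure above bottoms out in "sudo runc list -f json" exiting 1 with "open /run/runc: no such file or directory", even though crictl still reports running containers in the targeted namespaces; /run/runc is runc's default state root, so the listing fails before any container can be paused. A minimal reproduction sketch, assuming SSH access to the node through this profile and reusing the commands the pause path itself logs above (the final state-directory check is an illustrative addition, not taken from the log):

    out/minikube-linux-arm64 -p default-k8s-diff-port-841969 ssh
    # same listing the pause path runs; prints the container IDs shown above
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # fails with: open /run/runc: no such file or directory
    sudo runc list -f json
    # check which runtime state directories actually exist on the node
    ls -d /run/runc /run/crio 2>/dev/null
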
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-841969
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-841969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	        "Created": "2025-11-19T22:58:26.666905644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1074988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T23:00:09.464031053Z",
	            "FinishedAt": "2025-11-19T23:00:08.250269678Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hostname",
	        "HostsPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hosts",
	        "LogPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90-json.log",
	        "Name": "/default-k8s-diff-port-841969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-841969:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-841969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	                "LowerDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-841969",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-841969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-841969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff563e03c7404133b980fcd952c0c0e805e17ebd297f0b3337ba9aecc04346c",
	            "SandboxKey": "/var/run/docker/netns/dff563e03c74",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-841969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:f4:a3:09:c0:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2c6f4446420675e07c5c2c03d34bfff2c1cc2a3ba7ca61bbc8161387b161d43",
	                    "EndpointID": "22f5d7cf560c5759633bc775637bba1c8f18ab1a3a1fb0cc2e1f05971e9d18fc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-841969",
	                        "20b80382d56c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
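The inspect output above shows the published port map for the node container; the pause path reads the 22/tcp mapping with the Go template shown earlier in the log to build its SSH connection. The same query can be replayed directly (container name and expected host port are the ones in this report):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-841969
    # prints 33876, matching the 22/tcp mapping above
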
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969: exit status 2 (408.173985ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
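The host-only status check above exits non-zero while still printing Running; the earlier pause attempt had already disabled kubelet ("kubelet running: false" in the log), so a degraded status is plausible at this point, though the exit code alone does not say which component is down. A broader status query, assuming the standard minikube status fields, would look like this (sketch, not taken from the test run):

    out/minikube-linux-arm64 status -p default-k8s-diff-port-841969
    # or select individual components with a template:
    out/minikube-linux-arm64 status -p default-k8s-diff-port-841969 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
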
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25: (1.525446136s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                                                                                               │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                                                                                                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ image   │ default-k8s-diff-port-841969 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-841969 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:01:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:01:11.749412 1079225 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:01:11.749528 1079225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:11.749539 1079225 out.go:374] Setting ErrFile to fd 2...
	I1119 23:01:11.749544 1079225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:11.749820 1079225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:01:11.750248 1079225 out.go:368] Setting JSON to false
	I1119 23:01:11.751282 1079225 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17001,"bootTime":1763576271,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:01:11.751354 1079225 start.go:143] virtualization:  
	I1119 23:01:11.755448 1079225 out.go:179] * [newest-cni-467060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:01:11.759854 1079225 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:01:11.759966 1079225 notify.go:221] Checking for updates...
	I1119 23:01:11.766173 1079225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:01:11.769388 1079225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:01:11.772435 1079225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:01:11.775578 1079225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:01:11.778530 1079225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:01:11.782029 1079225 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:11.782160 1079225 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:01:11.816798 1079225 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:01:11.816932 1079225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:11.879177 1079225 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:01:11.869783182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:11.879284 1079225 docker.go:319] overlay module found
	I1119 23:01:11.882444 1079225 out.go:179] * Using the docker driver based on user configuration
	I1119 23:01:11.885575 1079225 start.go:309] selected driver: docker
	I1119 23:01:11.885598 1079225 start.go:930] validating driver "docker" against <nil>
	I1119 23:01:11.885613 1079225 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:01:11.886374 1079225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:11.940865 1079225 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:01:11.931836944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:11.941041 1079225 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 23:01:11.941065 1079225 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 23:01:11.941299 1079225 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:01:11.944369 1079225 out.go:179] * Using Docker driver with root privileges
	I1119 23:01:11.947256 1079225 cni.go:84] Creating CNI manager for ""
	I1119 23:01:11.947323 1079225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:11.947338 1079225 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 23:01:11.947419 1079225 start.go:353] cluster config:
	{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:01:11.952296 1079225 out.go:179] * Starting "newest-cni-467060" primary control-plane node in "newest-cni-467060" cluster
	I1119 23:01:11.955409 1079225 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:01:11.958500 1079225 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:01:11.961310 1079225 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:11.961366 1079225 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:01:11.961392 1079225 cache.go:65] Caching tarball of preloaded images
	I1119 23:01:11.961393 1079225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:01:11.961485 1079225 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:01:11.961496 1079225 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:01:11.961605 1079225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:01:11.961623 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json: {Name:mkfe2f4968ef5f373981866c5b71b97eec2a612b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:11.981990 1079225 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:01:11.982012 1079225 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:01:11.982031 1079225 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:01:11.982060 1079225 start.go:360] acquireMachinesLock for newest-cni-467060: {Name:mk24f21142ba5d810994dced903fd755f13fe1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:01:11.982268 1079225 start.go:364] duration metric: took 184.009µs to acquireMachinesLock for "newest-cni-467060"
	I1119 23:01:11.982313 1079225 start.go:93] Provisioning new machine with config: &{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:01:11.982395 1079225 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.603509341Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609731054Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609767067Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609803309Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.626509844Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.62654804Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.626575822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631129296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631163791Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631188316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.639368553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.6394034Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.292277931Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a1856c0f-430d-4696-841a-7ac8f69d5aa2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.306783791Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=078a1e2d-369a-4769-8b37-940312a42b04 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.311856712Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=8e7a29b8-7cd0-4f51-97a1-9f0c5b0b73e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.312011306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.338268649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.340621925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.367249372Z" level=info msg="Created container 9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=8e7a29b8-7cd0-4f51-97a1-9f0c5b0b73e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.368356442Z" level=info msg="Starting container: 9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755" id=6dcffec9-b3b3-44eb-9668-1f9a160a1564 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.370566577Z" level=info msg="Started container" PID=1756 containerID=9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper id=6dcffec9-b3b3-44eb-9668-1f9a160a1564 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a
	Nov 19 23:01:12 default-k8s-diff-port-841969 conmon[1754]: conmon 9f45a6d6d12c58bc0f1d <ninfo>: container 1756 exited with status 1
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.639758145Z" level=info msg="Removing container: e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.66783934Z" level=info msg="Error loading conmon cgroup of container e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4: cgroup deleted" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.670732082Z" level=info msg="Removed container e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	9f45a6d6d12c5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   f08b5506e417d       dashboard-metrics-scraper-6ffb444bf9-gpgmv             kubernetes-dashboard
	43bf488bf860d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   9cc6a544253b6       storage-provisioner                                    kube-system
	dab8910b75f81       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   d80757cf1705d       kubernetes-dashboard-855c9754f9-9xf4k                  kubernetes-dashboard
	2c7c69bd383a2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   bf05f78038f13       coredns-66bc5c9577-zkjxn                               kube-system
	d114359bf1912       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   93aac82972119       busybox                                                default
	9445f4a3071f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   b7f20b1c08cd5       kube-proxy-fbmdp                                       kube-system
	8d7655df39290       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   9831a1e001400       kindnet-65cjg                                          kube-system
	05ba740d7b534       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   9cc6a544253b6       storage-provisioner                                    kube-system
	52bfb6272ad18       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   5988204222ffb       etcd-default-k8s-diff-port-841969                      kube-system
	0f65aa748a61e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   7eb71554dcd78       kube-controller-manager-default-k8s-diff-port-841969   kube-system
	868c86f80fac7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   54deb8a2103e1       kube-scheduler-default-k8s-diff-port-841969            kube-system
	ac53ca3801483       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   b04eff87d00e7       kube-apiserver-default-k8s-diff-port-841969            kube-system
	
	
	==> coredns [2c7c69bd383a24d8f61d31bcd131f8cd9735445766b7fdf4afb0bc3df5e95e00] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38939 - 51811 "HINFO IN 1979189588570148994.6545036969100287332. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012857787s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-841969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-841969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-841969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-841969
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:01:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-841969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                8530e068-8eb5-4533-912c-551d1cf1fd1e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-zkjxn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-841969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-65cjg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-841969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-841969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-fbmdp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-841969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gpgmv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9xf4k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 46s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-841969 event: Registered Node default-k8s-diff-port-841969 in Controller
	  Normal   NodeReady                94s                    kubelet          Node default-k8s-diff-port-841969 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node default-k8s-diff-port-841969 event: Registered Node default-k8s-diff-port-841969 in Controller
	
	
	==> dmesg <==
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033] <==
	{"level":"warn","ts":"2025-11-19T23:00:22.614633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.674917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.691415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.714315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.750990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.791580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.823177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.844253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.879400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.984568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.019789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.118648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.162920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.240972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.267369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.313683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.334047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.375438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.424468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.453914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.490661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.528265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.589521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.653170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.863419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42344","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:16 up  4:43,  0 user,  load average: 2.87, 3.07, 2.59
	Linux default-k8s-diff-port-841969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d7655df3929094b8bfb0a3d41a9d7a3521de9bba6fc3ede8a051d2aab2bc56f] <==
	I1119 23:00:27.268577       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:00:27.279147       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 23:00:27.279294       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:00:27.279306       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:00:27.279322       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:00:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:00:27.605516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:00:27.611098       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:00:27.611194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:00:27.611666       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 23:00:57.606215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 23:00:57.611810       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 23:00:57.624289       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 23:00:57.624311       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 23:00:59.211776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 23:00:59.211811       1 metrics.go:72] Registering metrics
	I1119 23:00:59.211878       1 controller.go:711] "Syncing nftables rules"
	I1119 23:01:07.602946       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 23:01:07.602991       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99] <==
	I1119 23:00:25.715257       1 aggregator.go:171] initial CRD sync complete...
	I1119 23:00:25.715282       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 23:00:25.715289       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:00:25.715295       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:00:25.750513       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 23:00:25.763562       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:00:25.763677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:00:25.763685       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:00:25.787547       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:00:25.807362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:00:25.844450       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:00:25.844478       1 policy_source.go:240] refreshing policies
	I1119 23:00:25.847749       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:00:25.881008       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:00:26.185098       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:00:26.410267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:00:27.538706       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:00:27.646406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:00:27.769431       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:00:27.791274       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:00:28.180572       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.199.215"}
	I1119 23:00:28.292974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.95.58"}
	I1119 23:00:30.271640       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:00:30.321349       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:00:30.405716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a] <==
	I1119 23:00:29.911421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 23:00:29.915365       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:00:29.905656       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 23:00:29.914013       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:00:29.916489       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 23:00:29.918349       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 23:00:29.924094       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 23:00:29.924292       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 23:00:29.924562       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 23:00:29.929414       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 23:00:29.929863       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 23:00:29.933209       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 23:00:29.934171       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:00:29.936900       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 23:00:29.939974       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 23:00:29.945914       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 23:00:29.950157       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:00:29.956107       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 23:00:29.965692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:29.959539       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:29.957631       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 23:00:30.031819       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:30.031929       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:00:30.031962       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:00:30.038240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9445f4a3071f735b76fe6e727b1751f6879b126daf5d16886837fd4e7f8508bf] <==
	I1119 23:00:29.503638       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:00:29.600347       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:00:29.701304       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:00:29.701404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 23:00:29.701503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:00:29.986209       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:00:29.986272       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:00:30.006407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:00:30.007004       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:00:30.007298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:30.009973       1 config.go:200] "Starting service config controller"
	I1119 23:00:30.010081       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:00:30.010131       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:00:30.010159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:00:30.010202       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:00:30.010233       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:00:30.011353       1 config.go:309] "Starting node config controller"
	I1119 23:00:30.011457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:00:30.011492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:00:30.113468       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:00:30.113525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:00:30.113572       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef] <==
	I1119 23:00:23.276799       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:00:29.318829       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:00:29.318861       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:29.340355       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:00:29.340998       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:00:29.341065       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.363776       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.341076       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 23:00:29.364502       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 23:00:29.341041       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 23:00:29.371638       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 23:00:29.473057       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 23:00:29.474289       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.474380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.515550     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9kws\" (UniqueName: \"kubernetes.io/projected/89d84645-3a4c-455f-95a0-a0770b7eff59-kube-api-access-w9kws\") pod \"kubernetes-dashboard-855c9754f9-9xf4k\" (UID: \"89d84645-3a4c-455f-95a0-a0770b7eff59\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9xf4k"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.616236     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fca0b58b-a91e-4a26-9ce4-60c8201a8cd7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gpgmv\" (UID: \"fca0b58b-a91e-4a26-9ce4-60c8201a8cd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.616439     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qq7\" (UniqueName: \"kubernetes.io/projected/fca0b58b-a91e-4a26-9ce4-60c8201a8cd7-kube-api-access-s5qq7\") pod \"dashboard-metrics-scraper-6ffb444bf9-gpgmv\" (UID: \"fca0b58b-a91e-4a26-9ce4-60c8201a8cd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: W1119 23:00:30.900777     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/crio-f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a WatchSource:0}: Error finding container f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a: Status 404 returned error can't find the container with id f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a
	Nov 19 23:00:40 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:40.495767     795 scope.go:117] "RemoveContainer" containerID="7c7caa06c904a2fe6e165ac40509154ac7e6e8e728cae8f1fdd2556ee838916a"
	Nov 19 23:00:40 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:40.516921     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9xf4k" podStartSLOduration=5.324963093 podStartE2EDuration="10.51552367s" podCreationTimestamp="2025-11-19 23:00:30 +0000 UTC" firstStartedPulling="2025-11-19 23:00:30.862261843 +0000 UTC m=+13.998694665" lastFinishedPulling="2025-11-19 23:00:36.052822403 +0000 UTC m=+19.189255242" observedRunningTime="2025-11-19 23:00:36.496804328 +0000 UTC m=+19.633237142" watchObservedRunningTime="2025-11-19 23:00:40.51552367 +0000 UTC m=+23.651956484"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:41.499834     795 scope.go:117] "RemoveContainer" containerID="7c7caa06c904a2fe6e165ac40509154ac7e6e8e728cae8f1fdd2556ee838916a"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:41.500444     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:41.500729     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:42 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:42.503639     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:42 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:42.503824     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:50 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:50.815263     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:51.532578     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:51.532836     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:51.533415     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:58 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:58.552678     795 scope.go:117] "RemoveContainer" containerID="05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213"
	Nov 19 23:01:00 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:00.814993     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:00 default-k8s-diff-port-841969 kubelet[795]: E1119 23:01:00.815170     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.291606     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.599109     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.608549     795 scope.go:117] "RemoveContainer" containerID="9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: E1119 23:01:12.623029     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [dab8910b75f812e5f5ebad9ae21982e1ccaaf0c104cfae67f601b80b5213f688] <==
	2025/11/19 23:00:36 Using namespace: kubernetes-dashboard
	2025/11/19 23:00:36 Using in-cluster config to connect to apiserver
	2025/11/19 23:00:36 Using secret token for csrf signing
	2025/11/19 23:00:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 23:00:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 23:00:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 23:00:36 Generating JWE encryption key
	2025/11/19 23:00:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 23:00:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 23:00:36 Initializing JWE encryption key from synchronized object
	2025/11/19 23:00:36 Creating in-cluster Sidecar client
	2025/11/19 23:00:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:36 Serving insecurely on HTTP port: 9090
	2025/11/19 23:01:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:36 Starting overwatch
	
	
	==> storage-provisioner [05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213] <==
	I1119 23:00:27.971912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 23:00:57.989174       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [43bf488bf860dd3706258502ba7df99909404dcb85083c3969e024ce8d42f62d] <==
	I1119 23:00:58.600193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 23:00:58.613896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 23:00:58.614021       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 23:00:58.617116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:02.072029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:06.332729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:09.932413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:12.987839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.011529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.025021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:16.025176       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 23:01:16.027785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2!
	I1119 23:01:16.028733       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"436ac8e0-d182-4d76-a461-e7e8abb5704d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2 became leader
	W1119 23:01:16.029504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.042615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:16.128002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969: exit status 2 (407.293415ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-841969
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-841969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	        "Created": "2025-11-19T22:58:26.666905644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1074988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T23:00:09.464031053Z",
	            "FinishedAt": "2025-11-19T23:00:08.250269678Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hostname",
	        "HostsPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/hosts",
	        "LogPath": "/var/lib/docker/containers/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90-json.log",
	        "Name": "/default-k8s-diff-port-841969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-841969:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-841969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90",
	                "LowerDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac51790851579d8a9be5d265e53741ded396ecd9e70ddff285893347a2c13f85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-841969",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-841969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-841969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-841969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff563e03c7404133b980fcd952c0c0e805e17ebd297f0b3337ba9aecc04346c",
	            "SandboxKey": "/var/run/docker/netns/dff563e03c74",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-841969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:f4:a3:09:c0:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2c6f4446420675e07c5c2c03d34bfff2c1cc2a3ba7ca61bbc8161387b161d43",
	                    "EndpointID": "22f5d7cf560c5759633bc775637bba1c8f18ab1a3a1fb0cc2e1f05971e9d18fc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-841969",
	                        "20b80382d56c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
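For reference, the inspect dump above already contains the fields most useful when reproducing this check by hand: the localhost port bindings (22, 2376, 5000, 8444 and 32443 published on 127.0.0.1:33876-33880) and the container's static IP 192.168.85.2 on its dedicated bridge network. A minimal sketch of pulling those fields directly with docker inspect Go templates, assuming the default-k8s-diff-port-841969 container still exists on the host:

    # host port that forwards to the in-container apiserver port 8444
    docker inspect default-k8s-diff-port-841969 \
      --format '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'
    # container IP on the profile's bridge network
    docker inspect default-k8s-diff-port-841969 \
      --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
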
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969: exit status 2 (429.78706ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-841969 logs -n 25: (2.160094518s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p old-k8s-version-191961 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │                     │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:57 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p old-k8s-version-191961                                                                                                                                                                                                                     │ old-k8s-version-191961       │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                                                                                               │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                                                                                                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ image   │ default-k8s-diff-port-841969 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-841969 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
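The final audit row above is the pause invocation under test for this profile; no END TIME was recorded for it before the post-mortem began. A hedged sketch of re-running the same pair of commands by hand (both appear verbatim elsewhere in this report), assuming the profile still exists on the host:

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-841969 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969
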
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:01:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:01:11.749412 1079225 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:01:11.749528 1079225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:11.749539 1079225 out.go:374] Setting ErrFile to fd 2...
	I1119 23:01:11.749544 1079225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:11.749820 1079225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:01:11.750248 1079225 out.go:368] Setting JSON to false
	I1119 23:01:11.751282 1079225 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17001,"bootTime":1763576271,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:01:11.751354 1079225 start.go:143] virtualization:  
	I1119 23:01:11.755448 1079225 out.go:179] * [newest-cni-467060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:01:11.759854 1079225 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:01:11.759966 1079225 notify.go:221] Checking for updates...
	I1119 23:01:11.766173 1079225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:01:11.769388 1079225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:01:11.772435 1079225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:01:11.775578 1079225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:01:11.778530 1079225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:01:11.782029 1079225 config.go:182] Loaded profile config "default-k8s-diff-port-841969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:11.782160 1079225 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:01:11.816798 1079225 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:01:11.816932 1079225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:11.879177 1079225 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:01:11.869783182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:11.879284 1079225 docker.go:319] overlay module found
	I1119 23:01:11.882444 1079225 out.go:179] * Using the docker driver based on user configuration
	I1119 23:01:11.885575 1079225 start.go:309] selected driver: docker
	I1119 23:01:11.885598 1079225 start.go:930] validating driver "docker" against <nil>
	I1119 23:01:11.885613 1079225 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:01:11.886374 1079225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:11.940865 1079225 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:01:11.931836944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:11.941041 1079225 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 23:01:11.941065 1079225 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 23:01:11.941299 1079225 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:01:11.944369 1079225 out.go:179] * Using Docker driver with root privileges
	I1119 23:01:11.947256 1079225 cni.go:84] Creating CNI manager for ""
	I1119 23:01:11.947323 1079225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:11.947338 1079225 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 23:01:11.947419 1079225 start.go:353] cluster config:
	{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:01:11.952296 1079225 out.go:179] * Starting "newest-cni-467060" primary control-plane node in "newest-cni-467060" cluster
	I1119 23:01:11.955409 1079225 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:01:11.958500 1079225 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:01:11.961310 1079225 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:11.961366 1079225 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:01:11.961392 1079225 cache.go:65] Caching tarball of preloaded images
	I1119 23:01:11.961393 1079225 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:01:11.961485 1079225 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:01:11.961496 1079225 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:01:11.961605 1079225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:01:11.961623 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json: {Name:mkfe2f4968ef5f373981866c5b71b97eec2a612b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:11.981990 1079225 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:01:11.982012 1079225 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:01:11.982031 1079225 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:01:11.982060 1079225 start.go:360] acquireMachinesLock for newest-cni-467060: {Name:mk24f21142ba5d810994dced903fd755f13fe1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:01:11.982268 1079225 start.go:364] duration metric: took 184.009µs to acquireMachinesLock for "newest-cni-467060"
	I1119 23:01:11.982313 1079225 start.go:93] Provisioning new machine with config: &{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:01:11.982395 1079225 start.go:125] createHost starting for "" (driver="docker")
	I1119 23:01:11.985897 1079225 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 23:01:11.986151 1079225 start.go:159] libmachine.API.Create for "newest-cni-467060" (driver="docker")
	I1119 23:01:11.986191 1079225 client.go:173] LocalClient.Create starting
	I1119 23:01:11.986272 1079225 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 23:01:11.986313 1079225 main.go:143] libmachine: Decoding PEM data...
	I1119 23:01:11.986333 1079225 main.go:143] libmachine: Parsing certificate...
	I1119 23:01:11.986401 1079225 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 23:01:11.986426 1079225 main.go:143] libmachine: Decoding PEM data...
	I1119 23:01:11.986444 1079225 main.go:143] libmachine: Parsing certificate...
	I1119 23:01:11.986847 1079225 cli_runner.go:164] Run: docker network inspect newest-cni-467060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 23:01:12.003569 1079225 cli_runner.go:211] docker network inspect newest-cni-467060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 23:01:12.003657 1079225 network_create.go:284] running [docker network inspect newest-cni-467060] to gather additional debugging logs...
	I1119 23:01:12.003681 1079225 cli_runner.go:164] Run: docker network inspect newest-cni-467060
	W1119 23:01:12.022946 1079225 cli_runner.go:211] docker network inspect newest-cni-467060 returned with exit code 1
	I1119 23:01:12.022979 1079225 network_create.go:287] error running [docker network inspect newest-cni-467060]: docker network inspect newest-cni-467060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-467060 not found
	I1119 23:01:12.022995 1079225 network_create.go:289] output of [docker network inspect newest-cni-467060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-467060 not found
	
	** /stderr **
	I1119 23:01:12.023115 1079225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:01:12.041773 1079225 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 23:01:12.042147 1079225 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 23:01:12.042509 1079225 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 23:01:12.043131 1079225 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3e220}
	I1119 23:01:12.043168 1079225 network_create.go:124] attempt to create docker network newest-cni-467060 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 23:01:12.043230 1079225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-467060 newest-cni-467060
	I1119 23:01:12.105604 1079225 network_create.go:108] docker network newest-cni-467060 192.168.76.0/24 created
	I1119 23:01:12.105638 1079225 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-467060" container
	I1119 23:01:12.105727 1079225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 23:01:12.123284 1079225 cli_runner.go:164] Run: docker volume create newest-cni-467060 --label name.minikube.sigs.k8s.io=newest-cni-467060 --label created_by.minikube.sigs.k8s.io=true
	I1119 23:01:12.141200 1079225 oci.go:103] Successfully created a docker volume newest-cni-467060
	I1119 23:01:12.141301 1079225 cli_runner.go:164] Run: docker run --rm --name newest-cni-467060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-467060 --entrypoint /usr/bin/test -v newest-cni-467060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 23:01:12.926716 1079225 oci.go:107] Successfully prepared a docker volume newest-cni-467060
	I1119 23:01:12.926790 1079225 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:12.926799 1079225 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 23:01:12.927013 1079225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-467060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
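The start log above shows the subnet-selection step: minikube walks the existing kic bridge networks, skips the taken /24s (192.168.49.0, 192.168.58.0, 192.168.67.0) and creates newest-cni-467060 on the first free one, 192.168.76.0/24. A minimal sketch of listing those minikube-created networks and their subnets on the same host, reusing the created_by.minikube.sigs.k8s.io=true label applied by the docker network create call above:

    # enumerate minikube-created docker networks and the subnet each one claims
    for net in $(docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true); do
      docker network inspect "$net" \
        --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done
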
	
	
	==> CRI-O <==
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.603509341Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609731054Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609767067Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.609803309Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.626509844Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.62654804Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.626575822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631129296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631163791Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.631188316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.639368553Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 23:01:07 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:07.6394034Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.292277931Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a1856c0f-430d-4696-841a-7ac8f69d5aa2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.306783791Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=078a1e2d-369a-4769-8b37-940312a42b04 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.311856712Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=8e7a29b8-7cd0-4f51-97a1-9f0c5b0b73e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.312011306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.338268649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.340621925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.367249372Z" level=info msg="Created container 9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=8e7a29b8-7cd0-4f51-97a1-9f0c5b0b73e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.368356442Z" level=info msg="Starting container: 9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755" id=6dcffec9-b3b3-44eb-9668-1f9a160a1564 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.370566577Z" level=info msg="Started container" PID=1756 containerID=9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper id=6dcffec9-b3b3-44eb-9668-1f9a160a1564 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a
	Nov 19 23:01:12 default-k8s-diff-port-841969 conmon[1754]: conmon 9f45a6d6d12c58bc0f1d <ninfo>: container 1756 exited with status 1
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.639758145Z" level=info msg="Removing container: e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.66783934Z" level=info msg="Error loading conmon cgroup of container e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4: cgroup deleted" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 23:01:12 default-k8s-diff-port-841969 crio[662]: time="2025-11-19T23:01:12.670732082Z" level=info msg="Removed container e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv/dashboard-metrics-scraper" id=dd108171-56cb-40dd-ab36-4d00688d5809 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	9f45a6d6d12c5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   f08b5506e417d       dashboard-metrics-scraper-6ffb444bf9-gpgmv             kubernetes-dashboard
	43bf488bf860d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   9cc6a544253b6       storage-provisioner                                    kube-system
	dab8910b75f81       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   d80757cf1705d       kubernetes-dashboard-855c9754f9-9xf4k                  kubernetes-dashboard
	2c7c69bd383a2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   bf05f78038f13       coredns-66bc5c9577-zkjxn                               kube-system
	d114359bf1912       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   93aac82972119       busybox                                                default
	9445f4a3071f7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   b7f20b1c08cd5       kube-proxy-fbmdp                                       kube-system
	8d7655df39290       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   9831a1e001400       kindnet-65cjg                                          kube-system
	05ba740d7b534       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   9cc6a544253b6       storage-provisioner                                    kube-system
	52bfb6272ad18       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5988204222ffb       etcd-default-k8s-diff-port-841969                      kube-system
	0f65aa748a61e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7eb71554dcd78       kube-controller-manager-default-k8s-diff-port-841969   kube-system
	868c86f80fac7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   54deb8a2103e1       kube-scheduler-default-k8s-diff-port-841969            kube-system
	ac53ca3801483       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b04eff87d00e7       kube-apiserver-default-k8s-diff-port-841969            kube-system
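The CRI-O entries and the container status above show dashboard-metrics-scraper being recreated and exiting with status 1 within seconds (attempt 3 at this point), while the rest of the restarted control plane is Running. A hedged sketch of collecting that container's own logs from inside the node for a deeper post-mortem, assuming the profile is still up and using the truncated container ID from the table:

    out/minikube-linux-arm64 -p default-k8s-diff-port-841969 ssh -- \
      sudo crictl ps -a --name dashboard-metrics-scraper
    out/minikube-linux-arm64 -p default-k8s-diff-port-841969 ssh -- \
      sudo crictl logs 9f45a6d6d12c5
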
	
	
	==> coredns [2c7c69bd383a24d8f61d31bcd131f8cd9735445766b7fdf4afb0bc3df5e95e00] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38939 - 51811 "HINFO IN 1979189588570148994.6545036969100287332. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012857787s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
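The coredns log above ends with its list watches against the in-cluster API VIP (10.96.0.1:443, the kubernetes Service address) timing out, consistent with the stop/restart/pause sequence recorded in the audit table earlier in this report. A hedged sketch of checking that VIP and its backing endpoint from the host, assuming the kubeconfig context minikube created for this profile:

    kubectl --context default-k8s-diff-port-841969 get svc kubernetes -o wide
    kubectl --context default-k8s-diff-port-841969 get endpoints kubernetes
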
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-841969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-841969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-841969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_58_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:58:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-841969
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:01:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:00:56 +0000   Wed, 19 Nov 2025 22:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-841969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                8530e068-8eb5-4533-912c-551d1cf1fd1e
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-zkjxn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-841969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-65cjg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-841969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-841969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-fbmdp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-841969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gpgmv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9xf4k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-841969 event: Registered Node default-k8s-diff-port-841969 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-841969 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-841969 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node default-k8s-diff-port-841969 event: Registered Node default-k8s-diff-port-841969 in Controller
	
	
	==> dmesg <==
	[Nov19 22:37] overlayfs: idmapped layers are currently not supported
	[ +28.245949] overlayfs: idmapped layers are currently not supported
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52bfb6272ad18315a205a597275a2908c50905792855fa02c474eb334dde7033] <==
	{"level":"warn","ts":"2025-11-19T23:00:22.614633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.674917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.691415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.714315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.750990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.791580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.823177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.844253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.879400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:22.984568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.019789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.118648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.162920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.240972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.267369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.313683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.334047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.375438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.424468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.453914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.490661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.528265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.589521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.653170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:00:23.863419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42344","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:19 up  4:43,  0 user,  load average: 2.87, 3.07, 2.59
	Linux default-k8s-diff-port-841969 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d7655df3929094b8bfb0a3d41a9d7a3521de9bba6fc3ede8a051d2aab2bc56f] <==
	I1119 23:00:27.268577       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:00:27.279147       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 23:00:27.279294       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:00:27.279306       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:00:27.279322       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:00:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:00:27.605516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:00:27.611098       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:00:27.611194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:00:27.611666       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 23:00:57.606215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 23:00:57.611810       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 23:00:57.624289       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 23:00:57.624311       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 23:00:59.211776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 23:00:59.211811       1 metrics.go:72] Registering metrics
	I1119 23:00:59.211878       1 controller.go:711] "Syncing nftables rules"
	I1119 23:01:07.602946       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 23:01:07.602991       1 main.go:301] handling current node
	I1119 23:01:17.605003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 23:01:17.605040       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ac53ca3801483eeadc872ef523a919c2a27248a87fcca348b3677b815e5cdc99] <==
	I1119 23:00:25.715257       1 aggregator.go:171] initial CRD sync complete...
	I1119 23:00:25.715282       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 23:00:25.715289       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:00:25.715295       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:00:25.750513       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 23:00:25.763562       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:00:25.763677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:00:25.763685       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:00:25.787547       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:00:25.807362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:00:25.844450       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:00:25.844478       1 policy_source.go:240] refreshing policies
	I1119 23:00:25.847749       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:00:25.881008       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:00:26.185098       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:00:26.410267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:00:27.538706       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:00:27.646406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:00:27.769431       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:00:27.791274       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:00:28.180572       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.199.215"}
	I1119 23:00:28.292974       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.95.58"}
	I1119 23:00:30.271640       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:00:30.321349       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:00:30.405716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0f65aa748a61e533d7004d796a63a2ca937a30f669219b777e78d681df3e741a] <==
	I1119 23:00:29.911421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 23:00:29.915365       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:00:29.905656       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 23:00:29.914013       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:00:29.916489       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 23:00:29.918349       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 23:00:29.924094       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 23:00:29.924292       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 23:00:29.924562       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 23:00:29.929414       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 23:00:29.929863       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 23:00:29.933209       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 23:00:29.934171       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:00:29.936900       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 23:00:29.939974       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 23:00:29.945914       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 23:00:29.950157       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:00:29.956107       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 23:00:29.965692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:29.959539       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:00:29.957631       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 23:00:30.031819       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:00:30.031929       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:00:30.031962       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:00:30.038240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9445f4a3071f735b76fe6e727b1751f6879b126daf5d16886837fd4e7f8508bf] <==
	I1119 23:00:29.503638       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:00:29.600347       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:00:29.701304       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:00:29.701404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 23:00:29.701503       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:00:29.986209       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:00:29.986272       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:00:30.006407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:00:30.007004       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:00:30.007298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:30.009973       1 config.go:200] "Starting service config controller"
	I1119 23:00:30.010081       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:00:30.010131       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:00:30.010159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:00:30.010202       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:00:30.010233       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:00:30.011353       1 config.go:309] "Starting node config controller"
	I1119 23:00:30.011457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:00:30.011492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:00:30.113468       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:00:30.113525       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:00:30.113572       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [868c86f80fac77993d4e41965587d02dd422bfe189be2a35461673dd2cfa1aef] <==
	I1119 23:00:23.276799       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:00:29.318829       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:00:29.318861       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:00:29.340355       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:00:29.340998       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:00:29.341065       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.363776       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.341076       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 23:00:29.364502       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 23:00:29.341041       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 23:00:29.371638       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 23:00:29.473057       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 23:00:29.474289       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:00:29.474380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.515550     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9kws\" (UniqueName: \"kubernetes.io/projected/89d84645-3a4c-455f-95a0-a0770b7eff59-kube-api-access-w9kws\") pod \"kubernetes-dashboard-855c9754f9-9xf4k\" (UID: \"89d84645-3a4c-455f-95a0-a0770b7eff59\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9xf4k"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.616236     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fca0b58b-a91e-4a26-9ce4-60c8201a8cd7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gpgmv\" (UID: \"fca0b58b-a91e-4a26-9ce4-60c8201a8cd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:30.616439     795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qq7\" (UniqueName: \"kubernetes.io/projected/fca0b58b-a91e-4a26-9ce4-60c8201a8cd7-kube-api-access-s5qq7\") pod \"dashboard-metrics-scraper-6ffb444bf9-gpgmv\" (UID: \"fca0b58b-a91e-4a26-9ce4-60c8201a8cd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv"
	Nov 19 23:00:30 default-k8s-diff-port-841969 kubelet[795]: W1119 23:00:30.900777     795 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/20b80382d56c547f361111bf74cada8f730076bc65160b6cffb42558b1ea5c90/crio-f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a WatchSource:0}: Error finding container f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a: Status 404 returned error can't find the container with id f08b5506e417d440c754acc687262222bee55e9f3134e1278536752b0001849a
	Nov 19 23:00:40 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:40.495767     795 scope.go:117] "RemoveContainer" containerID="7c7caa06c904a2fe6e165ac40509154ac7e6e8e728cae8f1fdd2556ee838916a"
	Nov 19 23:00:40 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:40.516921     795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9xf4k" podStartSLOduration=5.324963093 podStartE2EDuration="10.51552367s" podCreationTimestamp="2025-11-19 23:00:30 +0000 UTC" firstStartedPulling="2025-11-19 23:00:30.862261843 +0000 UTC m=+13.998694665" lastFinishedPulling="2025-11-19 23:00:36.052822403 +0000 UTC m=+19.189255242" observedRunningTime="2025-11-19 23:00:36.496804328 +0000 UTC m=+19.633237142" watchObservedRunningTime="2025-11-19 23:00:40.51552367 +0000 UTC m=+23.651956484"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:41.499834     795 scope.go:117] "RemoveContainer" containerID="7c7caa06c904a2fe6e165ac40509154ac7e6e8e728cae8f1fdd2556ee838916a"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:41.500444     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:41 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:41.500729     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:42 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:42.503639     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:42 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:42.503824     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:50 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:50.815263     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:51.532578     795 scope.go:117] "RemoveContainer" containerID="c8be7427edc9448c54b6c79efd1cad9acd6c4c7ec1788bfeee053d8ef162c4ae"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:51.532836     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:00:51 default-k8s-diff-port-841969 kubelet[795]: E1119 23:00:51.533415     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:00:58 default-k8s-diff-port-841969 kubelet[795]: I1119 23:00:58.552678     795 scope.go:117] "RemoveContainer" containerID="05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213"
	Nov 19 23:01:00 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:00.814993     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:00 default-k8s-diff-port-841969 kubelet[795]: E1119 23:01:00.815170     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.291606     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.599109     795 scope.go:117] "RemoveContainer" containerID="e0005064dcf7b222d9d89a6a7c2e35089b148c5213b91ae243819f48c1c13cf4"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: I1119 23:01:12.608549     795 scope.go:117] "RemoveContainer" containerID="9f45a6d6d12c58bc0f1dc1166557ca2058ae075670901de943558ddae9293755"
	Nov 19 23:01:12 default-k8s-diff-port-841969 kubelet[795]: E1119 23:01:12.623029     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gpgmv_kubernetes-dashboard(fca0b58b-a91e-4a26-9ce4-60c8201a8cd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gpgmv" podUID="fca0b58b-a91e-4a26-9ce4-60c8201a8cd7"
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:01:13 default-k8s-diff-port-841969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [dab8910b75f812e5f5ebad9ae21982e1ccaaf0c104cfae67f601b80b5213f688] <==
	2025/11/19 23:00:36 Using namespace: kubernetes-dashboard
	2025/11/19 23:00:36 Using in-cluster config to connect to apiserver
	2025/11/19 23:00:36 Using secret token for csrf signing
	2025/11/19 23:00:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 23:00:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 23:00:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 23:00:36 Generating JWE encryption key
	2025/11/19 23:00:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 23:00:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 23:00:36 Initializing JWE encryption key from synchronized object
	2025/11/19 23:00:36 Creating in-cluster Sidecar client
	2025/11/19 23:00:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:36 Serving insecurely on HTTP port: 9090
	2025/11/19 23:01:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 23:00:36 Starting overwatch
	
	
	==> storage-provisioner [05ba740d7b534a1eb6ce057e9b1d89c6a1d15b2b00dbb0bc976f06d1cf0a1213] <==
	I1119 23:00:27.971912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 23:00:57.989174       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [43bf488bf860dd3706258502ba7df99909404dcb85083c3969e024ce8d42f62d] <==
	I1119 23:00:58.600193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 23:00:58.613896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 23:00:58.614021       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 23:00:58.617116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:02.072029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:06.332729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:09.932413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:12.987839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.011529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.025021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:16.025176       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 23:01:16.027785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2!
	I1119 23:01:16.028733       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"436ac8e0-d182-4d76-a461-e7e8abb5704d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2 became leader
	W1119 23:01:16.029504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:16.042615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 23:01:16.128002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-841969_382bc7ad-3351-4cd9-bc9e-b4997c7103c2!
	W1119 23:01:18.047612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 23:01:18.075486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969: exit status 2 (587.046336ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.04s)
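For reference, the same probes can be repeated by hand against the profile; a minimal sketch, assuming the default-k8s-diff-port-841969 profile still exists (--output=json is a standard minikube status flag, assumed available in this build; the harness itself treats the non-zero status above as possibly benign, per the "may be ok" note):

	# Repeat the single-field status probe used in the post-mortem, then ask for the full status.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
	out/minikube-linux-arm64 status -p default-k8s-diff-port-841969 --output=json
	# Re-run the pause step that this serial test exercises (same flags as the audit table below records).
	out/minikube-linux-arm64 pause -p default-k8s-diff-port-841969 --alsologtostderr -v=1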

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1119 23:02:00.142964  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (627.764032ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
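The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state check, which shells out to "sudo runc list -f json" inside the node and fails because /run/runc does not exist. A minimal manual diagnostic sketch, assuming the newest-cni-467060 profile is still running (the --root path in the last command is a hypothetical alternate state directory for cri-o, not something this report confirms):

	# Check whether the state directory named in the error is present on the node.
	out/minikube-linux-arm64 -p newest-cni-467060 ssh -- sudo ls /run/runc
	# Re-run the exact command the paused check uses.
	out/minikube-linux-arm64 -p newest-cni-467060 ssh -- sudo runc list -f json
	# Hypothetical: point runc at an alternate state root in case cri-o keeps its state elsewhere.
	out/minikube-linux-arm64 -p newest-cni-467060 ssh -- sudo runc --root /run/runc/crio list -f json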
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-467060
helpers_test.go:243: (dbg) docker inspect newest-cni-467060:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	        "Created": "2025-11-19T23:01:19.724337429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1080582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T23:01:19.807966676Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hostname",
	        "HostsPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hosts",
	        "LogPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293-json.log",
	        "Name": "/newest-cni-467060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-467060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-467060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	                "LowerDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-467060",
	                "Source": "/var/lib/docker/volumes/newest-cni-467060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-467060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-467060",
	                "name.minikube.sigs.k8s.io": "newest-cni-467060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffbf116aea2ffe96994b0454a0e05d6c92f5ef549d3ee4d693195b502aa4193f",
	            "SandboxKey": "/var/run/docker/netns/ffbf116aea2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-467060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:71:49:15:17:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5606ddc27f5d747e0e03f70d6f5351c9e8418cffd201d9b7b94a06728f9f0e86",
	                    "EndpointID": "dbece0e3337bbefbaa7a0883b87bd2008828efb3e52b844137d1c79ebfc63715",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-467060",
	                        "373502afc116"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25: (1.449378759s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-018508 image list --format=json                                                                                                                                                                                                    │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ pause   │ -p no-preload-018508 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p no-preload-018508                                                                                                                                                                                                                          │ no-preload-018508            │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-553369                                                                                                                                                                                                               │ disable-driver-mounts-553369 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                                                                                                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-841969 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-841969 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p auto-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-334366                  │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:01:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:01:24.215511 1081671 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:01:24.215652 1081671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:24.215664 1081671 out.go:374] Setting ErrFile to fd 2...
	I1119 23:01:24.215669 1081671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:01:24.216084 1081671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:01:24.216602 1081671 out.go:368] Setting JSON to false
	I1119 23:01:24.217744 1081671 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17013,"bootTime":1763576271,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:01:24.217825 1081671 start.go:143] virtualization:  
	I1119 23:01:24.221723 1081671 out.go:179] * [auto-334366] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:01:24.225823 1081671 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:01:24.225893 1081671 notify.go:221] Checking for updates...
	I1119 23:01:24.232027 1081671 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:01:24.235257 1081671 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:01:24.238148 1081671 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:01:24.241135 1081671 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:01:24.244305 1081671 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:01:24.247931 1081671 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:24.248040 1081671 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:01:24.279884 1081671 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:01:24.280033 1081671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:24.345980 1081671 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:01:24.335887918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:24.346085 1081671 docker.go:319] overlay module found
	I1119 23:01:24.349343 1081671 out.go:179] * Using the docker driver based on user configuration
	I1119 23:01:24.352389 1081671 start.go:309] selected driver: docker
	I1119 23:01:24.352417 1081671 start.go:930] validating driver "docker" against <nil>
	I1119 23:01:24.352431 1081671 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:01:24.353174 1081671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:01:24.413495 1081671 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 23:01:24.398574877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:01:24.413677 1081671 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 23:01:24.413961 1081671 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:01:24.416973 1081671 out.go:179] * Using Docker driver with root privileges
	I1119 23:01:24.419817 1081671 cni.go:84] Creating CNI manager for ""
	I1119 23:01:24.419893 1081671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:24.419912 1081671 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 23:01:24.420004 1081671 start.go:353] cluster config:
	{Name:auto-334366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-334366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1119 23:01:24.423124 1081671 out.go:179] * Starting "auto-334366" primary control-plane node in "auto-334366" cluster
	I1119 23:01:24.426059 1081671 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:01:24.428977 1081671 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:01:24.431866 1081671 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:24.431917 1081671 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:01:24.431928 1081671 cache.go:65] Caching tarball of preloaded images
	I1119 23:01:24.431951 1081671 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:01:24.432018 1081671 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:01:24.432029 1081671 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:01:24.432137 1081671 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/config.json ...
	I1119 23:01:24.432155 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/config.json: {Name:mkf297030b5f20e450d5649969a32a3c80efd35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:24.451815 1081671 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:01:24.451838 1081671 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:01:24.451857 1081671 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:01:24.451881 1081671 start.go:360] acquireMachinesLock for auto-334366: {Name:mk6d28a259640322700e3481ce3eee3e377c42eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:01:24.451986 1081671 start.go:364] duration metric: took 84.694µs to acquireMachinesLock for "auto-334366"
	I1119 23:01:24.452017 1081671 start.go:93] Provisioning new machine with config: &{Name:auto-334366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-334366 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:01:24.452109 1081671 start.go:125] createHost starting for "" (driver="docker")
	I1119 23:01:24.912161 1079225 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:01:24.912192 1079225 ubuntu.go:182] provisioning hostname "newest-cni-467060"
	I1119 23:01:24.912268 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:24.939416 1079225 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:24.940034 1079225 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1119 23:01:24.940064 1079225 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-467060 && echo "newest-cni-467060" | sudo tee /etc/hostname
	I1119 23:01:25.144519 1079225 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:01:25.144599 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:25.169651 1079225 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:25.169953 1079225 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1119 23:01:25.169980 1079225 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-467060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-467060/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-467060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:01:25.335080 1079225 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:01:25.335111 1079225 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:01:25.335138 1079225 ubuntu.go:190] setting up certificates
	I1119 23:01:25.335149 1079225 provision.go:84] configureAuth start
	I1119 23:01:25.335222 1079225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:01:25.362454 1079225 provision.go:143] copyHostCerts
	I1119 23:01:25.362516 1079225 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:01:25.362528 1079225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:01:25.362604 1079225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:01:25.362718 1079225 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:01:25.362730 1079225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:01:25.362759 1079225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:01:25.362831 1079225 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:01:25.362841 1079225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:01:25.362883 1079225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:01:25.362957 1079225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.newest-cni-467060 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-467060]
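	Illustrative check (not part of the recorded run), assuming the server.pem path and san=[...] list shown in the line above: the SANs baked into the generated server certificate can be inspected with openssl.
	
	    # print the Subject Alternative Name block of the generated server cert
	    openssl x509 -in /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'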
	I1119 23:01:25.478365 1079225 provision.go:177] copyRemoteCerts
	I1119 23:01:25.478615 1079225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:01:25.478712 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:25.502363 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:25.607842 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:01:25.630841 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:01:25.651334 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 23:01:25.671279 1079225 provision.go:87] duration metric: took 336.099671ms to configureAuth
	I1119 23:01:25.671308 1079225 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:01:25.671493 1079225 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:25.671600 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:25.696731 1079225 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:25.697056 1079225 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1119 23:01:25.697079 1079225 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:01:26.114275 1079225 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:01:26.114328 1079225 machine.go:97] duration metric: took 4.398623988s to provisionDockerMachine
	I1119 23:01:26.114340 1079225 client.go:176] duration metric: took 14.128136757s to LocalClient.Create
	I1119 23:01:26.114353 1079225 start.go:167] duration metric: took 14.128203769s to libmachine.API.Create "newest-cni-467060"
	I1119 23:01:26.114363 1079225 start.go:293] postStartSetup for "newest-cni-467060" (driver="docker")
	I1119 23:01:26.114388 1079225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:01:26.114496 1079225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:01:26.114551 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:26.138677 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:26.243942 1079225 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:01:26.248057 1079225 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:01:26.248082 1079225 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:01:26.248094 1079225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:01:26.248146 1079225 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:01:26.248224 1079225 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:01:26.248325 1079225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:01:26.256823 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:01:26.277675 1079225 start.go:296] duration metric: took 163.281767ms for postStartSetup
	I1119 23:01:26.278087 1079225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:01:26.296420 1079225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:01:26.296675 1079225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:01:26.296725 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:26.323267 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:26.427417 1079225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:01:26.435642 1079225 start.go:128] duration metric: took 14.453230618s to createHost
	I1119 23:01:26.435670 1079225 start.go:83] releasing machines lock for "newest-cni-467060", held for 14.453383686s
	I1119 23:01:26.435764 1079225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:01:26.452769 1079225 ssh_runner.go:195] Run: cat /version.json
	I1119 23:01:26.452933 1079225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:01:26.453073 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:26.453247 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:26.487757 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:26.497982 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:26.720816 1079225 ssh_runner.go:195] Run: systemctl --version
	I1119 23:01:26.727587 1079225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:01:24.457419 1081671 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 23:01:24.457663 1081671 start.go:159] libmachine.API.Create for "auto-334366" (driver="docker")
	I1119 23:01:24.457702 1081671 client.go:173] LocalClient.Create starting
	I1119 23:01:24.457787 1081671 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem
	I1119 23:01:24.457823 1081671 main.go:143] libmachine: Decoding PEM data...
	I1119 23:01:24.457844 1081671 main.go:143] libmachine: Parsing certificate...
	I1119 23:01:24.457900 1081671 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem
	I1119 23:01:24.457925 1081671 main.go:143] libmachine: Decoding PEM data...
	I1119 23:01:24.457938 1081671 main.go:143] libmachine: Parsing certificate...
	I1119 23:01:24.458299 1081671 cli_runner.go:164] Run: docker network inspect auto-334366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 23:01:24.478972 1081671 cli_runner.go:211] docker network inspect auto-334366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 23:01:24.479059 1081671 network_create.go:284] running [docker network inspect auto-334366] to gather additional debugging logs...
	I1119 23:01:24.479081 1081671 cli_runner.go:164] Run: docker network inspect auto-334366
	W1119 23:01:24.495076 1081671 cli_runner.go:211] docker network inspect auto-334366 returned with exit code 1
	I1119 23:01:24.495106 1081671 network_create.go:287] error running [docker network inspect auto-334366]: docker network inspect auto-334366: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-334366 not found
	I1119 23:01:24.495120 1081671 network_create.go:289] output of [docker network inspect auto-334366]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-334366 not found
	
	** /stderr **
	I1119 23:01:24.495224 1081671 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:01:24.510586 1081671 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
	I1119 23:01:24.511011 1081671 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-409f9deb7199 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:64:cf:3b:93:91} reservation:<nil>}
	I1119 23:01:24.511383 1081671 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-601de6a5616d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:01:2f:20:8b:a3} reservation:<nil>}
	I1119 23:01:24.511644 1081671 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5606ddc27f5d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:2f:d5:80:da:26} reservation:<nil>}
	I1119 23:01:24.512076 1081671 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019eca90}
	I1119 23:01:24.512094 1081671 network_create.go:124] attempt to create docker network auto-334366 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 23:01:24.512153 1081671 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-334366 auto-334366
	I1119 23:01:24.565618 1081671 network_create.go:108] docker network auto-334366 192.168.85.0/24 created
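	Illustrative check (not part of the recorded run): the network created above can be verified with the docker CLI, using the same Go-template style this log uses for its own inspect calls.
	
	    # confirm the subnet/gateway minikube picked for the auto-334366 network
	    docker network inspect auto-334366 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected for this run: 192.168.85.0/24 192.168.85.1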
	I1119 23:01:24.565665 1081671 kic.go:121] calculated static IP "192.168.85.2" for the "auto-334366" container
	I1119 23:01:24.565756 1081671 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 23:01:24.590461 1081671 cli_runner.go:164] Run: docker volume create auto-334366 --label name.minikube.sigs.k8s.io=auto-334366 --label created_by.minikube.sigs.k8s.io=true
	I1119 23:01:24.608362 1081671 oci.go:103] Successfully created a docker volume auto-334366
	I1119 23:01:24.608447 1081671 cli_runner.go:164] Run: docker run --rm --name auto-334366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-334366 --entrypoint /usr/bin/test -v auto-334366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 23:01:25.243164 1081671 oci.go:107] Successfully prepared a docker volume auto-334366
	I1119 23:01:25.243230 1081671 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:25.243240 1081671 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 23:01:25.243301 1081671 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-334366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 23:01:26.768884 1079225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:01:26.773589 1079225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:01:26.773710 1079225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:01:26.813493 1079225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 23:01:26.813568 1079225 start.go:496] detecting cgroup driver to use...
	I1119 23:01:26.813613 1079225 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:01:26.813690 1079225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:01:26.833989 1079225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:01:26.848455 1079225 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:01:26.848570 1079225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:01:26.867515 1079225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:01:26.888003 1079225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:01:27.048115 1079225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:01:27.230540 1079225 docker.go:234] disabling docker service ...
	I1119 23:01:27.230661 1079225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:01:27.268820 1079225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:01:27.285668 1079225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:01:27.453309 1079225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:01:27.602704 1079225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:01:27.618716 1079225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:01:27.634772 1079225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:01:27.634942 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.644750 1079225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:01:27.644872 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.654219 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.663590 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.673330 1079225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:01:27.681896 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.692617 1079225 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.707094 1079225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:27.717348 1079225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:01:27.730252 1079225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:01:27.741382 1079225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:01:27.946474 1079225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:01:30.334916 1079225 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.388354619s)
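	Illustrative check (not part of the recorded run), assuming the drop-in path edited above: after the restart, the cri-o configuration and runtime can be verified directly on the node.
	
	    # confirm the sed edits landed in the cri-o drop-in
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # confirm the runtime answers on the socket minikube waits for below
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version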
	I1119 23:01:30.334940 1079225 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:01:30.334992 1079225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:01:30.341277 1079225 start.go:564] Will wait 60s for crictl version
	I1119 23:01:30.341353 1079225 ssh_runner.go:195] Run: which crictl
	I1119 23:01:30.347011 1079225 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:01:30.379466 1079225 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:01:30.379555 1079225 ssh_runner.go:195] Run: crio --version
	I1119 23:01:30.438570 1079225 ssh_runner.go:195] Run: crio --version
	I1119 23:01:30.483464 1079225 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:01:30.486983 1079225 cli_runner.go:164] Run: docker network inspect newest-cni-467060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:01:30.505693 1079225 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 23:01:30.509977 1079225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:01:30.524517 1079225 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 23:01:30.527371 1079225 kubeadm.go:884] updating cluster {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:01:30.527525 1079225 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:30.527598 1079225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:01:30.566588 1079225 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:01:30.566617 1079225 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:01:30.566686 1079225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:01:30.622030 1079225 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:01:30.622050 1079225 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:01:30.622058 1079225 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 23:01:30.622150 1079225 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-467060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
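	Illustrative check (not part of the recorded run): once the 10-kubeadm.conf drop-in generated from the unit above is copied onto the node (see the scp lines below), the effective kubelet unit, including the ExecStart shown here, can be reviewed with systemd.
	
	    # show kubelet.service together with the minikube drop-in
	    sudo systemctl cat kubelet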
	I1119 23:01:30.622247 1079225 ssh_runner.go:195] Run: crio config
	I1119 23:01:30.785877 1079225 cni.go:84] Creating CNI manager for ""
	I1119 23:01:30.785903 1079225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:30.785926 1079225 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 23:01:30.785955 1079225 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-467060 NodeName:newest-cni-467060 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:01:30.786111 1079225 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-467060"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:01:30.786197 1079225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:01:30.804165 1079225 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:01:30.804250 1079225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:01:30.817893 1079225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 23:01:30.841874 1079225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:01:30.868600 1079225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
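	Illustrative sketch (not part of the recorded run), assuming kubeadm sits alongside kubelet in the binaries directory found above: the rendered config copied to kubeadm.yaml.new can be sanity-checked with a dry run before the real init.
	
	    # validate the generated kubeadm config without applying any changes
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run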
	I1119 23:01:30.887497 1079225 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:01:30.893498 1079225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:01:30.906695 1079225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:01:31.156458 1079225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:01:31.181908 1079225 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060 for IP: 192.168.76.2
	I1119 23:01:31.181928 1079225 certs.go:195] generating shared ca certs ...
	I1119 23:01:31.181944 1079225 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:31.182107 1079225 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:01:31.182154 1079225 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:01:31.182161 1079225 certs.go:257] generating profile certs ...
	I1119 23:01:31.182228 1079225 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.key
	I1119 23:01:31.182239 1079225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.crt with IP's: []
	I1119 23:01:30.146407 1081671 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-334366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.903068144s)
	I1119 23:01:30.146440 1081671 kic.go:203] duration metric: took 4.903195858s to extract preloaded images to volume ...
	W1119 23:01:30.146609 1081671 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 23:01:30.146770 1081671 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 23:01:30.249950 1081671 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-334366 --name auto-334366 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-334366 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-334366 --network auto-334366 --ip 192.168.85.2 --volume auto-334366:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 23:01:30.602775 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Running}}
	I1119 23:01:30.630969 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:01:30.660388 1081671 cli_runner.go:164] Run: docker exec auto-334366 stat /var/lib/dpkg/alternatives/iptables
	I1119 23:01:30.721834 1081671 oci.go:144] the created container "auto-334366" has a running status.
	I1119 23:01:30.721865 1081671 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa...
	I1119 23:01:30.942794 1081671 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 23:01:30.968760 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:01:31.003948 1081671 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 23:01:31.003971 1081671 kic_runner.go:114] Args: [docker exec --privileged auto-334366 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 23:01:31.077505 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:01:31.107917 1081671 machine.go:94] provisionDockerMachine start ...
	I1119 23:01:31.108006 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:31.136599 1081671 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:31.136933 1081671 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33886 <nil> <nil>}
	I1119 23:01:31.136944 1081671 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:01:31.137751 1081671 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 23:01:32.184203 1079225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.crt ...
	I1119 23:01:32.184235 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.crt: {Name:mka3bea1f68912a5dadfa465dd4eaeb0dad6735e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:32.184435 1079225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.key ...
	I1119 23:01:32.184449 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.key: {Name:mk14e56e9e90aeaacf8da3cf4ef7356c28d7afd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:32.184548 1079225 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a
	I1119 23:01:32.184568 1079225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt.08ecb68a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 23:01:33.055333 1079225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt.08ecb68a ...
	I1119 23:01:33.055372 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt.08ecb68a: {Name:mk3789dd1fc2965f340977b88367e9c929cb2bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:33.055581 1079225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a ...
	I1119 23:01:33.055599 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a: {Name:mkec0f5dd321c6981d6f586e8b583cec84d6a3c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:33.055697 1079225 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt.08ecb68a -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt
	I1119 23:01:33.055779 1079225 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key
	I1119 23:01:33.055848 1079225 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key
	I1119 23:01:33.055869 1079225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt with IP's: []
	I1119 23:01:33.463491 1079225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt ...
	I1119 23:01:33.463527 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt: {Name:mk4daeee74ab771efef547c18d057ddb06b7adea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:33.463785 1079225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key ...
	I1119 23:01:33.463803 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key: {Name:mk551f2060249744ace34b786e5d2d9a2b2bb5a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:33.464020 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:01:33.464066 1079225 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:01:33.464090 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:01:33.464128 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:01:33.464162 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:01:33.464192 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:01:33.464266 1079225 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:01:33.464925 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:01:33.485382 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:01:33.505612 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:01:33.529099 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:01:33.548840 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 23:01:33.567982 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 23:01:33.587081 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:01:33.606336 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:01:33.624733 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:01:33.642846 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:01:33.661000 1079225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:01:33.679383 1079225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:01:33.692442 1079225 ssh_runner.go:195] Run: openssl version
	I1119 23:01:33.700660 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:01:33.710059 1079225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:01:33.713862 1079225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:01:33.713975 1079225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:01:33.754883 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:01:33.763561 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:01:33.772219 1079225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:33.776073 1079225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:33.776185 1079225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:33.818837 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:01:33.827383 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:01:33.835648 1079225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:01:33.839551 1079225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:01:33.839619 1079225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:01:33.880412 1079225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:01:33.889024 1079225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:01:33.892454 1079225 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:01:33.892507 1079225 kubeadm.go:401] StartCluster: {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:01:33.892583 1079225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:01:33.892657 1079225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:01:33.919150 1079225 cri.go:89] found id: ""
	I1119 23:01:33.919236 1079225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:01:33.927499 1079225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 23:01:33.935359 1079225 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 23:01:33.935427 1079225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 23:01:33.943465 1079225 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 23:01:33.943487 1079225 kubeadm.go:158] found existing configuration files:
	
	I1119 23:01:33.943552 1079225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 23:01:33.951558 1079225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 23:01:33.951627 1079225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 23:01:33.959156 1079225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 23:01:33.966899 1079225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 23:01:33.966974 1079225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 23:01:33.974471 1079225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 23:01:33.982502 1079225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 23:01:33.982586 1079225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 23:01:33.990513 1079225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 23:01:33.998048 1079225 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 23:01:33.998115 1079225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 23:01:34.007841 1079225 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 23:01:34.073929 1079225 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 23:01:34.074177 1079225 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 23:01:34.147554 1079225 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 23:01:34.290919 1081671 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-334366
	
	I1119 23:01:34.290940 1081671 ubuntu.go:182] provisioning hostname "auto-334366"
	I1119 23:01:34.291007 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:34.309593 1081671 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:34.309895 1081671 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33886 <nil> <nil>}
	I1119 23:01:34.309905 1081671 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-334366 && echo "auto-334366" | sudo tee /etc/hostname
	I1119 23:01:34.480991 1081671 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-334366
	
	I1119 23:01:34.481154 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:34.512983 1081671 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:34.513355 1081671 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33886 <nil> <nil>}
	I1119 23:01:34.513377 1081671 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-334366' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-334366/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-334366' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:01:34.683877 1081671 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:01:34.683968 1081671 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:01:34.684012 1081671 ubuntu.go:190] setting up certificates
	I1119 23:01:34.684050 1081671 provision.go:84] configureAuth start
	I1119 23:01:34.684171 1081671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-334366
	I1119 23:01:34.712878 1081671 provision.go:143] copyHostCerts
	I1119 23:01:34.712955 1081671 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:01:34.712965 1081671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:01:34.713043 1081671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:01:34.713146 1081671 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:01:34.713151 1081671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:01:34.713177 1081671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:01:34.713252 1081671 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:01:34.713258 1081671 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:01:34.713284 1081671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:01:34.713337 1081671 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.auto-334366 san=[127.0.0.1 192.168.85.2 auto-334366 localhost minikube]
	I1119 23:01:35.762963 1081671 provision.go:177] copyRemoteCerts
	I1119 23:01:35.763086 1081671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:01:35.763156 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:35.781260 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:01:35.894918 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:01:35.923914 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1119 23:01:35.954726 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:01:35.984081 1081671 provision.go:87] duration metric: took 1.299990602s to configureAuth
	I1119 23:01:35.984173 1081671 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:01:35.984523 1081671 config.go:182] Loaded profile config "auto-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:35.984738 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:36.017564 1081671 main.go:143] libmachine: Using SSH client type: native
	I1119 23:01:36.018151 1081671 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33886 <nil> <nil>}
	I1119 23:01:36.018193 1081671 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:01:36.448993 1081671 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:01:36.449019 1081671 machine.go:97] duration metric: took 5.341082595s to provisionDockerMachine
	I1119 23:01:36.449029 1081671 client.go:176] duration metric: took 11.991315698s to LocalClient.Create
	I1119 23:01:36.449043 1081671 start.go:167] duration metric: took 11.991382366s to libmachine.API.Create "auto-334366"
	I1119 23:01:36.449056 1081671 start.go:293] postStartSetup for "auto-334366" (driver="docker")
	I1119 23:01:36.449071 1081671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:01:36.449158 1081671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:01:36.449210 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:36.472112 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:01:36.581182 1081671 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:01:36.585180 1081671 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:01:36.585205 1081671 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:01:36.585215 1081671 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:01:36.585273 1081671 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:01:36.585349 1081671 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:01:36.585446 1081671 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:01:36.593767 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:01:36.613242 1081671 start.go:296] duration metric: took 164.133391ms for postStartSetup
	I1119 23:01:36.613677 1081671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-334366
	I1119 23:01:36.635817 1081671 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/config.json ...
	I1119 23:01:36.636095 1081671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:01:36.636138 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:36.656870 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:01:36.768304 1081671 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:01:36.776855 1081671 start.go:128] duration metric: took 12.324729469s to createHost
	I1119 23:01:36.776876 1081671 start.go:83] releasing machines lock for "auto-334366", held for 12.324877022s
	I1119 23:01:36.776944 1081671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-334366
	I1119 23:01:36.824870 1081671 ssh_runner.go:195] Run: cat /version.json
	I1119 23:01:36.824920 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:36.825152 1081671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:01:36.825241 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:01:36.856453 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:01:36.868650 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:01:37.070176 1081671 ssh_runner.go:195] Run: systemctl --version
	I1119 23:01:37.078379 1081671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:01:37.137600 1081671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:01:37.142465 1081671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:01:37.142584 1081671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:01:37.172867 1081671 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 23:01:37.172942 1081671 start.go:496] detecting cgroup driver to use...
	I1119 23:01:37.173011 1081671 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:01:37.173089 1081671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:01:37.196012 1081671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:01:37.211194 1081671 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:01:37.211310 1081671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:01:37.229069 1081671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:01:37.249253 1081671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:01:37.423174 1081671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:01:37.615955 1081671 docker.go:234] disabling docker service ...
	I1119 23:01:37.616075 1081671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:01:37.652710 1081671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:01:37.671596 1081671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:01:37.831772 1081671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:01:37.973999 1081671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:01:37.991706 1081671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:01:38.008895 1081671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:01:38.008976 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.020079 1081671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:01:38.020205 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.031178 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.041611 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.053190 1081671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:01:38.063100 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.073650 1081671 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.089789 1081671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:01:38.099703 1081671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:01:38.108465 1081671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:01:38.116699 1081671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:01:38.271264 1081671 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:01:38.503487 1081671 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:01:38.503611 1081671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:01:38.508190 1081671 start.go:564] Will wait 60s for crictl version
	I1119 23:01:38.508335 1081671 ssh_runner.go:195] Run: which crictl
	I1119 23:01:38.513599 1081671 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:01:38.550462 1081671 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:01:38.550620 1081671 ssh_runner.go:195] Run: crio --version
	I1119 23:01:38.615452 1081671 ssh_runner.go:195] Run: crio --version
	I1119 23:01:38.656460 1081671 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:01:38.659315 1081671 cli_runner.go:164] Run: docker network inspect auto-334366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:01:38.680621 1081671 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 23:01:38.684868 1081671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:01:38.694410 1081671 kubeadm.go:884] updating cluster {Name:auto-334366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-334366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:01:38.694527 1081671 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:01:38.694591 1081671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:01:38.738964 1081671 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:01:38.738985 1081671 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:01:38.739041 1081671 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:01:38.772861 1081671 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:01:38.772937 1081671 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:01:38.772959 1081671 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 23:01:38.773083 1081671 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-334366 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-334366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:01:38.773199 1081671 ssh_runner.go:195] Run: crio config
	I1119 23:01:38.844001 1081671 cni.go:84] Creating CNI manager for ""
	I1119 23:01:38.844073 1081671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:38.844120 1081671 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:01:38.844163 1081671 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-334366 NodeName:auto-334366 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:01:38.844423 1081671 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-334366"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:01:38.844554 1081671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:01:38.853057 1081671 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:01:38.853194 1081671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:01:38.861413 1081671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1119 23:01:38.875425 1081671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:01:38.889608 1081671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1119 23:01:38.903414 1081671 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:01:38.906942 1081671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:01:38.917314 1081671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:01:39.070512 1081671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:01:39.092721 1081671 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366 for IP: 192.168.85.2
	I1119 23:01:39.092744 1081671 certs.go:195] generating shared ca certs ...
	I1119 23:01:39.092761 1081671 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:39.092927 1081671 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:01:39.093001 1081671 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:01:39.093017 1081671 certs.go:257] generating profile certs ...
	I1119 23:01:39.093096 1081671 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.key
	I1119 23:01:39.093113 1081671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt with IP's: []
	I1119 23:01:40.391341 1081671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt ...
	I1119 23:01:40.391457 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: {Name:mk5abc5ceabf4f0fb64b31e4068c1e9a2f69872a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:40.391694 1081671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.key ...
	I1119 23:01:40.391736 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.key: {Name:mk64784fd33827592ad37efe24e53694512590dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:40.391888 1081671 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key.51750dc5
	I1119 23:01:40.391935 1081671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt.51750dc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 23:01:40.834606 1081671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt.51750dc5 ...
	I1119 23:01:40.834642 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt.51750dc5: {Name:mkdb1fdd8113249b056c967cce03745d068c5e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:40.834843 1081671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key.51750dc5 ...
	I1119 23:01:40.834860 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key.51750dc5: {Name:mk5d89397716abd6f679d0b62418ad4bcdb20548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:40.834978 1081671 certs.go:382] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt.51750dc5 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt
	I1119 23:01:40.835060 1081671 certs.go:386] copying /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key.51750dc5 -> /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key
	I1119 23:01:40.835125 1081671 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.key
	I1119 23:01:40.835146 1081671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.crt with IP's: []
	I1119 23:01:41.542508 1081671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.crt ...
	I1119 23:01:41.542599 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.crt: {Name:mkbb68489b6dc662e40f931c9e7f5711a31f01fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:41.542823 1081671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.key ...
	I1119 23:01:41.542885 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.key: {Name:mk6ac4f76638822733c483c0bda2d7dcb827fce6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:41.543123 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:01:41.543195 1081671 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:01:41.543234 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:01:41.543289 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:01:41.543353 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:01:41.543401 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:01:41.543492 1081671 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:01:41.544106 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:01:41.569761 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:01:41.608021 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:01:41.632588 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:01:41.653576 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1119 23:01:41.687834 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:01:41.717689 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:01:41.750818 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:01:41.788129 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:01:41.823533 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:01:41.851746 1081671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:01:41.881328 1081671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:01:41.903339 1081671 ssh_runner.go:195] Run: openssl version
	I1119 23:01:41.910213 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:01:41.920086 1081671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:41.927550 1081671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:41.927668 1081671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:01:41.992385 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:01:42.000999 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:01:42.012503 1081671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:01:42.019956 1081671 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:01:42.020112 1081671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:01:42.085397 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:01:42.100168 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:01:42.116981 1081671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:01:42.127966 1081671 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:01:42.128156 1081671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:01:42.208244 1081671 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:01:42.219486 1081671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:01:42.228280 1081671 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:01:42.228409 1081671 kubeadm.go:401] StartCluster: {Name:auto-334366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-334366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:01:42.228533 1081671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:01:42.228704 1081671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:01:42.277474 1081671 cri.go:89] found id: ""
	I1119 23:01:42.277638 1081671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:01:42.292983 1081671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 23:01:42.306614 1081671 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 23:01:42.306770 1081671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 23:01:42.322961 1081671 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 23:01:42.323035 1081671 kubeadm.go:158] found existing configuration files:
	
	I1119 23:01:42.323129 1081671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 23:01:42.332353 1081671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 23:01:42.332476 1081671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 23:01:42.347370 1081671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 23:01:42.356209 1081671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 23:01:42.356325 1081671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 23:01:42.371458 1081671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 23:01:42.389152 1081671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 23:01:42.389268 1081671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 23:01:42.408125 1081671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 23:01:42.420156 1081671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 23:01:42.420273 1081671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
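The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is missing (here the files do not exist yet, so every grep exits non-zero and the rm is a no-op). A minimal sketch of the same loop, using the endpoint and paths shown in the log:

# Hedged sketch of the cleanup logged above; harmless when the files are absent.
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done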
	I1119 23:01:42.432873 1081671 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 23:01:42.499890 1081671 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 23:01:42.500204 1081671 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 23:01:42.543770 1081671 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 23:01:42.544149 1081671 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 23:01:42.544238 1081671 kubeadm.go:319] OS: Linux
	I1119 23:01:42.544324 1081671 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 23:01:42.544414 1081671 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 23:01:42.544516 1081671 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 23:01:42.544579 1081671 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 23:01:42.544634 1081671 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 23:01:42.544688 1081671 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 23:01:42.544739 1081671 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 23:01:42.544792 1081671 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 23:01:42.544844 1081671 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 23:01:42.663299 1081671 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 23:01:42.663481 1081671 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 23:01:42.663657 1081671 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 23:01:42.677598 1081671 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 23:01:42.683526 1081671 out.go:252]   - Generating certificates and keys ...
	I1119 23:01:42.683703 1081671 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 23:01:42.683827 1081671 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 23:01:42.848636 1081671 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 23:01:43.739206 1081671 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 23:01:44.066410 1081671 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 23:01:44.957857 1081671 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 23:01:45.943523 1081671 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 23:01:45.943810 1081671 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-334366 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 23:01:46.557831 1081671 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 23:01:46.558126 1081671 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-334366 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 23:01:47.413676 1081671 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 23:01:47.794926 1081671 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 23:01:49.729424 1081671 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 23:01:49.729666 1081671 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 23:01:50.102008 1081671 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 23:01:50.311605 1081671 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 23:01:50.684717 1081671 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 23:01:51.530453 1081671 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 23:01:52.137755 1081671 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 23:01:52.137864 1081671 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 23:01:52.140353 1081671 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 23:01:52.272094 1079225 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 23:01:52.272153 1079225 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 23:01:52.272275 1079225 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 23:01:52.272349 1079225 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 23:01:52.272392 1079225 kubeadm.go:319] OS: Linux
	I1119 23:01:52.272444 1079225 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 23:01:52.272499 1079225 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 23:01:52.272552 1079225 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 23:01:52.272605 1079225 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 23:01:52.272660 1079225 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 23:01:52.272715 1079225 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 23:01:52.272766 1079225 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 23:01:52.272820 1079225 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 23:01:52.272872 1079225 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 23:01:52.272951 1079225 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 23:01:52.273061 1079225 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 23:01:52.273159 1079225 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 23:01:52.273228 1079225 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 23:01:52.276480 1079225 out.go:252]   - Generating certificates and keys ...
	I1119 23:01:52.276582 1079225 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 23:01:52.276664 1079225 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 23:01:52.276742 1079225 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 23:01:52.276816 1079225 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 23:01:52.276886 1079225 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 23:01:52.276943 1079225 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 23:01:52.277003 1079225 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 23:01:52.277133 1079225 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-467060] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 23:01:52.277193 1079225 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 23:01:52.277321 1079225 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-467060] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 23:01:52.277402 1079225 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 23:01:52.277472 1079225 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 23:01:52.277530 1079225 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 23:01:52.277594 1079225 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 23:01:52.277652 1079225 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 23:01:52.277715 1079225 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 23:01:52.277775 1079225 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 23:01:52.277865 1079225 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 23:01:52.277928 1079225 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 23:01:52.278017 1079225 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 23:01:52.278091 1079225 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 23:01:52.281102 1079225 out.go:252]   - Booting up control plane ...
	I1119 23:01:52.281244 1079225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 23:01:52.281342 1079225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 23:01:52.281422 1079225 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 23:01:52.281543 1079225 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 23:01:52.281651 1079225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 23:01:52.281772 1079225 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 23:01:52.281870 1079225 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 23:01:52.281917 1079225 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 23:01:52.282067 1079225 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 23:01:52.282188 1079225 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 23:01:52.282258 1079225 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501694935s
	I1119 23:01:52.282364 1079225 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 23:01:52.282458 1079225 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 23:01:52.282562 1079225 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 23:01:52.282662 1079225 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 23:01:52.282750 1079225 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.121122065s
	I1119 23:01:52.282832 1079225 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.965088005s
	I1119 23:01:52.282926 1079225 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.004382008s
	I1119 23:01:52.283049 1079225 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 23:01:52.283195 1079225 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 23:01:52.283276 1079225 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 23:01:52.283489 1079225 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-467060 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 23:01:52.283556 1079225 kubeadm.go:319] [bootstrap-token] Using token: gn0alp.6h8gser507tf55gq
	I1119 23:01:52.288452 1079225 out.go:252]   - Configuring RBAC rules ...
	I1119 23:01:52.288607 1079225 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 23:01:52.288712 1079225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 23:01:52.288894 1079225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 23:01:52.289052 1079225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 23:01:52.289211 1079225 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 23:01:52.289321 1079225 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 23:01:52.289468 1079225 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 23:01:52.289528 1079225 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 23:01:52.289591 1079225 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 23:01:52.289600 1079225 kubeadm.go:319] 
	I1119 23:01:52.289688 1079225 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 23:01:52.289698 1079225 kubeadm.go:319] 
	I1119 23:01:52.289792 1079225 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 23:01:52.289806 1079225 kubeadm.go:319] 
	I1119 23:01:52.289835 1079225 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 23:01:52.289912 1079225 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 23:01:52.289979 1079225 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 23:01:52.289988 1079225 kubeadm.go:319] 
	I1119 23:01:52.290058 1079225 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 23:01:52.290069 1079225 kubeadm.go:319] 
	I1119 23:01:52.290129 1079225 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 23:01:52.290139 1079225 kubeadm.go:319] 
	I1119 23:01:52.290202 1079225 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 23:01:52.290296 1079225 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 23:01:52.290390 1079225 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 23:01:52.290399 1079225 kubeadm.go:319] 
	I1119 23:01:52.290498 1079225 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 23:01:52.290594 1079225 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 23:01:52.290615 1079225 kubeadm.go:319] 
	I1119 23:01:52.290728 1079225 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gn0alp.6h8gser507tf55gq \
	I1119 23:01:52.290879 1079225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 \
	I1119 23:01:52.290915 1079225 kubeadm.go:319] 	--control-plane 
	I1119 23:01:52.290924 1079225 kubeadm.go:319] 
	I1119 23:01:52.291031 1079225 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 23:01:52.291040 1079225 kubeadm.go:319] 
	I1119 23:01:52.291134 1079225 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gn0alp.6h8gser507tf55gq \
	I1119 23:01:52.291263 1079225 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a427555dc11b146e6b21248d593e73d24115f2afad72289cc3f2ae17d0d2f2b1 
	I1119 23:01:52.291289 1079225 cni.go:84] Creating CNI manager for ""
	I1119 23:01:52.291309 1079225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:01:52.296187 1079225 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 23:01:52.143061 1081671 out.go:252]   - Booting up control plane ...
	I1119 23:01:52.143173 1081671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 23:01:52.143260 1081671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 23:01:52.145682 1081671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 23:01:52.182246 1081671 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 23:01:52.182408 1081671 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 23:01:52.197243 1081671 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 23:01:52.198075 1081671 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 23:01:52.198533 1081671 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 23:01:52.386787 1081671 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 23:01:52.386955 1081671 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 23:01:53.887636 1081671 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500924509s
	I1119 23:01:53.891313 1081671 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 23:01:53.891416 1081671 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 23:01:53.891515 1081671 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 23:01:53.891602 1081671 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 23:01:52.299099 1079225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 23:01:52.303304 1079225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 23:01:52.303361 1079225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 23:01:52.316863 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 23:01:52.716744 1079225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 23:01:52.716897 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:52.716975 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-467060 minikube.k8s.io/updated_at=2025_11_19T23_01_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=newest-cni-467060 minikube.k8s.io/primary=true
	I1119 23:01:53.059806 1079225 ops.go:34] apiserver oom_adj: -16
	I1119 23:01:53.059922 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:53.560884 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:54.060074 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:54.560903 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:55.060646 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:55.560040 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:56.060137 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:56.559971 1079225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:01:57.111748 1079225 kubeadm.go:1114] duration metric: took 4.394920295s to wait for elevateKubeSystemPrivileges
	I1119 23:01:57.111774 1079225 kubeadm.go:403] duration metric: took 23.219271086s to StartCluster
	I1119 23:01:57.111790 1079225 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:57.111855 1079225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:01:57.112497 1079225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:01:57.112737 1079225 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:01:57.112890 1079225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 23:01:57.113151 1079225 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:01:57.113188 1079225 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:01:57.113245 1079225 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-467060"
	I1119 23:01:57.113264 1079225 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-467060"
	I1119 23:01:57.113285 1079225 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:01:57.113818 1079225 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:01:57.114329 1079225 addons.go:70] Setting default-storageclass=true in profile "newest-cni-467060"
	I1119 23:01:57.114360 1079225 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-467060"
	I1119 23:01:57.114688 1079225 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:01:57.116916 1079225 out.go:179] * Verifying Kubernetes components...
	I1119 23:01:57.124243 1079225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:01:57.157890 1079225 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:01:57.161065 1079225 addons.go:239] Setting addon default-storageclass=true in "newest-cni-467060"
	I1119 23:01:57.161108 1079225 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:01:57.161553 1079225 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:01:57.161739 1079225 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:01:57.161756 1079225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:01:57.161794 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:57.194530 1079225 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:01:57.194553 1079225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:01:57.194635 1079225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:01:57.208593 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:57.236574 1079225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:01:57.595575 1079225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:01:57.667998 1079225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:01:57.794639 1079225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 23:01:57.794782 1079225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:01:59.251555 1079225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.58346672s)
	I1119 23:01:59.251832 1079225 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.456932637s)
	I1119 23:01:59.252065 1079225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.457247175s)
	I1119 23:01:59.252082 1079225 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 23:01:59.253731 1079225 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:01:59.253963 1079225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:01:59.256272 1079225 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 23:01:59.259174 1079225 addons.go:515] duration metric: took 2.145969629s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 23:01:59.300713 1079225 api_server.go:72] duration metric: took 2.187945543s to wait for apiserver process to appear ...
	I1119 23:01:59.300742 1079225 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:01:59.300760 1079225 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:01:59.325547 1079225 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 23:01:59.329907 1079225 api_server.go:141] control plane version: v1.34.1
	I1119 23:01:59.329949 1079225 api_server.go:131] duration metric: took 29.199806ms to wait for apiserver health ...
	I1119 23:01:59.329959 1079225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:01:59.351758 1079225 system_pods.go:59] 9 kube-system pods found
	I1119 23:01:59.351805 1079225 system_pods.go:61] "coredns-66bc5c9577-8xn65" [ef6f99cd-44ff-4adf-bb68-7328e6d5178e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:01:59.351814 1079225 system_pods.go:61] "coredns-66bc5c9577-bncpp" [17b33c0a-8ce3-4ed7-a40e-d37adbb06aed] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:01:59.351830 1079225 system_pods.go:61] "etcd-newest-cni-467060" [3ab9e3f1-b893-4477-b3e5-5d3f99a18ea0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:01:59.351835 1079225 system_pods.go:61] "kindnet-4sgcn" [eeb9b480-cec1-4be0-a705-e73199a83c5d] Running
	I1119 23:01:59.351841 1079225 system_pods.go:61] "kube-apiserver-newest-cni-467060" [8d6e3d44-2b2e-49ce-b2e7-0e21ed742414] Running
	I1119 23:01:59.351847 1079225 system_pods.go:61] "kube-controller-manager-newest-cni-467060" [5b42fbd4-53ad-4282-8760-3d28be1b3a9f] Running
	I1119 23:01:59.351857 1079225 system_pods.go:61] "kube-proxy-ldb2r" [cdecedd2-bfb5-4826-be33-924e26a05b88] Running
	I1119 23:01:59.351862 1079225 system_pods.go:61] "kube-scheduler-newest-cni-467060" [91153281-645c-4b6e-9408-1d3946edc224] Running
	I1119 23:01:59.351873 1079225 system_pods.go:61] "storage-provisioner" [0fb6f975-1da8-41b4-91f4-240a5daf116b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:01:59.351879 1079225 system_pods.go:74] duration metric: took 21.914731ms to wait for pod list to return data ...
	I1119 23:01:59.351894 1079225 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:01:59.367740 1079225 default_sa.go:45] found service account: "default"
	I1119 23:01:59.367769 1079225 default_sa.go:55] duration metric: took 15.867786ms for default service account to be created ...
	I1119 23:01:59.367783 1079225 kubeadm.go:587] duration metric: took 2.255022094s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:01:59.367799 1079225 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:01:59.403672 1079225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:01:59.403709 1079225 node_conditions.go:123] node cpu capacity is 2
	I1119 23:01:59.403721 1079225 node_conditions.go:105] duration metric: took 35.915982ms to run NodePressure ...
	I1119 23:01:59.403734 1079225 start.go:242] waiting for startup goroutines ...
	I1119 23:01:59.755569 1079225 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-467060" context rescaled to 1 replicas
	I1119 23:01:59.755609 1079225 start.go:247] waiting for cluster config update ...
	I1119 23:01:59.755621 1079225 start.go:256] writing updated cluster config ...
	I1119 23:01:59.755917 1079225 ssh_runner.go:195] Run: rm -f paused
	I1119 23:01:59.855783 1079225 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:01:59.859194 1079225 out.go:179] * Done! kubectl is now configured to use "newest-cni-467060" cluster and "default" namespace by default
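The ConfigMap replace that completes at 23:01:59 above is how minikube injects the host.minikube.internal record into CoreDNS: the live Corefile is piped through sed to add a hosts block ahead of the forward directive, then written back with kubectl replace. A condensed sketch of that edit, assuming kubectl is already pointed at this cluster (the gateway IP 192.168.76.1 comes from the log):

# Condensed from the ssh_runner command logged at 23:01:57 above; this modifies the live CoreDNS config.
kubectl -n kube-system get configmap coredns -o yaml \
  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
  | kubectl -n kube-system replace -f -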
	
	
	==> CRI-O <==
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.619264808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.623234573Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8a2cd4f3-8981-4c94-a432-a8d28e7bf44b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.640097609Z" level=info msg="Running pod sandbox: kube-system/kindnet-4sgcn/POD" id=d0303ab8-c0a3-43d9-9166-e0a78c13122b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.640279133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.659792989Z" level=info msg="Ran pod sandbox 4e37ebd40e31fa395d83042c076e5c371debad03dcb0ea258eaba1641664d44c with infra container: kube-system/kube-proxy-ldb2r/POD" id=8a2cd4f3-8981-4c94-a432-a8d28e7bf44b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.665011306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dbe389cb-6048-46c6-b17d-a7d9d31e37d5 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.666433004Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=366f0dd6-e42e-4b41-b3d2-3a0a2730cbe1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.688837216Z" level=info msg="Creating container: kube-system/kube-proxy-ldb2r/kube-proxy" id=1fcad4d9-9319-42b0-ae91-e060e6943c68 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.689123282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.694348984Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d0303ab8-c0a3-43d9-9166-e0a78c13122b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.745736851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.754266909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.757698898Z" level=info msg="Ran pod sandbox 69bf95aec6dca48e9c12735fed1ff38ab49e02ed486188e050fb502c3c4ec830 with infra container: kube-system/kindnet-4sgcn/POD" id=d0303ab8-c0a3-43d9-9166-e0a78c13122b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.809265466Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=897e8e5a-be1d-40b7-8973-6e79f2ddee27 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.816481535Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=97a546a5-d35a-431f-9ce7-b6f51b767ee1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.832311036Z" level=info msg="Creating container: kube-system/kindnet-4sgcn/kindnet-cni" id=68665774-d625-40ba-a3ab-5acf30414187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.832612725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.84591235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.84666937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.907355524Z" level=info msg="Created container d97de1678d12826f12a564a7cf4381f58be22d5c47965bd90bc63b42478cce40: kube-system/kube-proxy-ldb2r/kube-proxy" id=1fcad4d9-9319-42b0-ae91-e060e6943c68 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.910418639Z" level=info msg="Starting container: d97de1678d12826f12a564a7cf4381f58be22d5c47965bd90bc63b42478cce40" id=8ad7ab33-1ae6-473e-b5ae-7952026cdef2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.92101534Z" level=info msg="Started container" PID=1398 containerID=d97de1678d12826f12a564a7cf4381f58be22d5c47965bd90bc63b42478cce40 description=kube-system/kube-proxy-ldb2r/kube-proxy id=8ad7ab33-1ae6-473e-b5ae-7952026cdef2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e37ebd40e31fa395d83042c076e5c371debad03dcb0ea258eaba1641664d44c
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.942346172Z" level=info msg="Created container e71dbf85a7758b681cd0f3b1c348636c2415800cef54c26fba1c5926944b2a75: kube-system/kindnet-4sgcn/kindnet-cni" id=68665774-d625-40ba-a3ab-5acf30414187 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.955241215Z" level=info msg="Starting container: e71dbf85a7758b681cd0f3b1c348636c2415800cef54c26fba1c5926944b2a75" id=dbeeb59c-fe8a-4b67-811f-9aae416f8c06 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:01:56 newest-cni-467060 crio[838]: time="2025-11-19T23:01:56.957358033Z" level=info msg="Started container" PID=1401 containerID=e71dbf85a7758b681cd0f3b1c348636c2415800cef54c26fba1c5926944b2a75 description=kube-system/kindnet-4sgcn/kindnet-cni id=dbeeb59c-fe8a-4b67-811f-9aae416f8c06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=69bf95aec6dca48e9c12735fed1ff38ab49e02ed486188e050fb502c3c4ec830
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e71dbf85a7758       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               0                   69bf95aec6dca       kindnet-4sgcn                               kube-system
	d97de1678d128       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                0                   4e37ebd40e31f       kube-proxy-ldb2r                            kube-system
	868e23e3a37d5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago      Running             kube-apiserver            0                   b5221588a079d       kube-apiserver-newest-cni-467060            kube-system
	c811d6d4a3e0b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago      Running             kube-scheduler            0                   6d6f916e65115       kube-scheduler-newest-cni-467060            kube-system
	68af3eba84ac9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago      Running             etcd                      0                   27e3f26281a3b       etcd-newest-cni-467060                      kube-system
	bf04bf2ca516a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago      Running             kube-controller-manager   0                   91b13baec6ee2       kube-controller-manager-newest-cni-467060   kube-system
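The status table above comes from the container runtime rather than the API server; with the CRI-O runtime used in this run, roughly the same view is available on the node (the log itself runs the same command with label filters at 23:01:42):

# List all containers straight from the CRI runtime, similar to the table above.
sudo crictl ps -a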
	
	
	==> describe nodes <==
	Name:               newest-cni-467060
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-467060
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-467060
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T23_01_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:01:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-467060
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:01:52 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:01:52 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:01:52 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 23:01:52 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-467060
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                ff8061df-c1de-4a74-aef4-0d63b13a9d04
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-467060                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-4sgcn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6s
	  kube-system                 kube-apiserver-newest-cni-467060             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-newest-cni-467060    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-ldb2r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-scheduler-newest-cni-467060             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x8 over 22s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s                kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-467060 event: Registered Node newest-cni-467060 in Controller
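The node is still NotReady at this point: the Ready condition cites the missing CNI configuration, and the resulting node.kubernetes.io/not-ready taint is what keeps the coredns and storage-provisioner pods Pending in the pod list earlier in the log. A quick hedged check once kindnet has written its config (the exact file name under /etc/cni/net.d/ is assumed):

# Confirm a CNI config exists and the not-ready taint has cleared.
ls /etc/cni/net.d/
kubectl get node newest-cni-467060 -o jsonpath='{.spec.taints}{"\n"}'
kubectl get node newest-cni-467060 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'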
	
	
	==> dmesg <==
	[  +5.340865] overlayfs: idmapped layers are currently not supported
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	[Nov19 23:01] overlayfs: idmapped layers are currently not supported
	[ +13.188823] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [68af3eba84ac9b480528dd738fed379f9ba58c3fc3311385c2b66c4328d439fb] <==
	{"level":"warn","ts":"2025-11-19T23:01:46.415023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.446021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.478858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.509932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.535058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.559316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.580559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.606002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.622059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.683095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.696143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.714679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.751682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.783075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.799853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.827595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.854973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.877884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.895305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.908664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:46.932335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:47.001488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:47.003188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:47.037060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:01:47.199434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:02 up  4:44,  0 user,  load average: 5.00, 3.55, 2.78
	Linux newest-cni-467060 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e71dbf85a7758b681cd0f3b1c348636c2415800cef54c26fba1c5926944b2a75] <==
	I1119 23:01:57.142628       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:01:57.144722       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 23:01:57.144940       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:01:57.144955       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:01:57.144974       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:01:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:01:57.475746       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:01:57.475834       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:01:57.475868       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:01:57.476684       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [868e23e3a37d5b516e432cb70e271bf16f111fa757e38a97f6c35d49e11602f0] <==
	I1119 23:01:48.708832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:01:48.708840       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:01:48.723640       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:01:48.757034       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:01:48.757134       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 23:01:48.827459       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:01:48.830464       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:01:48.830569       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:01:49.217792       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 23:01:49.230906       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 23:01:49.230939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:01:50.278307       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:01:50.343647       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:01:50.557986       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 23:01:50.582622       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 23:01:50.584272       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:01:50.608762       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:01:50.610576       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:01:51.669891       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:01:51.703952       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 23:01:51.717032       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 23:01:56.254463       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 23:01:56.404011       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:01:56.807403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:01:56.937995       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [bf04bf2ca516a102e604c145b53afa1bae425d4a8800bc5851e9b12afa912dd9] <==
	I1119 23:01:55.685243       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 23:01:55.685553       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 23:01:55.685571       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:01:55.685582       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:01:55.682974       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 23:01:55.704645       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:01:55.704741       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 23:01:55.710914       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 23:01:55.724691       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 23:01:55.724876       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 23:01:55.731090       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 23:01:55.731241       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:01:55.731258       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 23:01:55.731296       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 23:01:55.731357       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 23:01:55.731390       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:01:55.731439       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:01:55.731508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-467060"
	I1119 23:01:55.731542       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 23:01:55.731570       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 23:01:55.731709       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:01:55.731717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:01:55.731721       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:01:55.756825       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:01:55.762410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [d97de1678d12826f12a564a7cf4381f58be22d5c47965bd90bc63b42478cce40] <==
	I1119 23:01:57.656158       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:01:57.812503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:01:57.913184       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:01:57.913223       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 23:01:57.913307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:01:57.990768       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:01:57.990930       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:01:58.014492       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:01:58.015359       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:01:58.015449       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:01:58.024436       1 config.go:200] "Starting service config controller"
	I1119 23:01:58.024572       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:01:58.024632       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:01:58.024666       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:01:58.024716       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:01:58.024764       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:01:58.025482       1 config.go:309] "Starting node config controller"
	I1119 23:01:58.044940       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:01:58.045069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:01:58.125284       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:01:58.125328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:01:58.125366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c811d6d4a3e0b1ccc7bfa2afac67c358cf5d278094fb2a1d19f0c69195244e06] <==
	E1119 23:01:48.819514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 23:01:48.819649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 23:01:48.819791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 23:01:48.820027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 23:01:48.818558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 23:01:48.842659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 23:01:48.842852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 23:01:48.843041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 23:01:48.843227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 23:01:48.843338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 23:01:48.845582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 23:01:48.845780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 23:01:48.845949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 23:01:48.846086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 23:01:48.846448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 23:01:48.846551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 23:01:49.649189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 23:01:49.710784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 23:01:49.793816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 23:01:49.802361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 23:01:49.835040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 23:01:49.862807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 23:01:49.874101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 23:01:49.989708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1119 23:01:52.270744       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.223497    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b7df894626766d258742b49ba2cef21-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-467060\" (UID: \"1b7df894626766d258742b49ba2cef21\") " pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.223516    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b7df894626766d258742b49ba2cef21-kubeconfig\") pod \"kube-controller-manager-newest-cni-467060\" (UID: \"1b7df894626766d258742b49ba2cef21\") " pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.223538    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b7df894626766d258742b49ba2cef21-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-467060\" (UID: \"1b7df894626766d258742b49ba2cef21\") " pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.752532    1291 apiserver.go:52] "Watching apiserver"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.819962    1291 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: I1119 23:01:52.955483    1291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-467060"
	Nov 19 23:01:52 newest-cni-467060 kubelet[1291]: E1119 23:01:52.998757    1291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-467060\" already exists" pod="kube-system/kube-apiserver-newest-cni-467060"
	Nov 19 23:01:53 newest-cni-467060 kubelet[1291]: I1119 23:01:53.070234    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-467060" podStartSLOduration=1.070203075 podStartE2EDuration="1.070203075s" podCreationTimestamp="2025-11-19 23:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:53.035489902 +0000 UTC m=+1.457352911" watchObservedRunningTime="2025-11-19 23:01:53.070203075 +0000 UTC m=+1.492066084"
	Nov 19 23:01:53 newest-cni-467060 kubelet[1291]: I1119 23:01:53.100064    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-467060" podStartSLOduration=1.100042787 podStartE2EDuration="1.100042787s" podCreationTimestamp="2025-11-19 23:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:53.070653425 +0000 UTC m=+1.492516459" watchObservedRunningTime="2025-11-19 23:01:53.100042787 +0000 UTC m=+1.521905788"
	Nov 19 23:01:53 newest-cni-467060 kubelet[1291]: I1119 23:01:53.100189    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-467060" podStartSLOduration=1.100184795 podStartE2EDuration="1.100184795s" podCreationTimestamp="2025-11-19 23:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:53.095260396 +0000 UTC m=+1.517123397" watchObservedRunningTime="2025-11-19 23:01:53.100184795 +0000 UTC m=+1.522047812"
	Nov 19 23:01:53 newest-cni-467060 kubelet[1291]: I1119 23:01:53.147584    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-467060" podStartSLOduration=1.147564767 podStartE2EDuration="1.147564767s" podCreationTimestamp="2025-11-19 23:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:53.118375702 +0000 UTC m=+1.540238711" watchObservedRunningTime="2025-11-19 23:01:53.147564767 +0000 UTC m=+1.569427768"
	Nov 19 23:01:55 newest-cni-467060 kubelet[1291]: I1119 23:01:55.755538    1291 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 23:01:55 newest-cni-467060 kubelet[1291]: I1119 23:01:55.756242    1291 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.353906    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdecedd2-bfb5-4826-be33-924e26a05b88-kube-proxy\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.353957    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-xtables-lock\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.353978    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-lib-modules\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.353999    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrlk\" (UniqueName: \"kubernetes.io/projected/cdecedd2-bfb5-4826-be33-924e26a05b88-kube-api-access-2jrlk\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.354021    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-cni-cfg\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.354040    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-xtables-lock\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.354057    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-lib-modules\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.354087    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtbn\" (UniqueName: \"kubernetes.io/projected/eeb9b480-cec1-4be0-a705-e73199a83c5d-kube-api-access-zrtbn\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: I1119 23:01:56.522017    1291 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 23:01:56 newest-cni-467060 kubelet[1291]: W1119 23:01:56.638630    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/crio-4e37ebd40e31fa395d83042c076e5c371debad03dcb0ea258eaba1641664d44c WatchSource:0}: Error finding container 4e37ebd40e31fa395d83042c076e5c371debad03dcb0ea258eaba1641664d44c: Status 404 returned error can't find the container with id 4e37ebd40e31fa395d83042c076e5c371debad03dcb0ea258eaba1641664d44c
	Nov 19 23:01:58 newest-cni-467060 kubelet[1291]: I1119 23:01:58.056726    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ldb2r" podStartSLOduration=2.056702791 podStartE2EDuration="2.056702791s" podCreationTimestamp="2025-11-19 23:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:57.28979129 +0000 UTC m=+5.711654709" watchObservedRunningTime="2025-11-19 23:01:58.056702791 +0000 UTC m=+6.478565874"
	Nov 19 23:01:58 newest-cni-467060 kubelet[1291]: I1119 23:01:58.195110    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4sgcn" podStartSLOduration=2.19509117 podStartE2EDuration="2.19509117s" podCreationTimestamp="2025-11-19 23:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:01:58.067049308 +0000 UTC m=+6.488912350" watchObservedRunningTime="2025-11-19 23:01:58.19509117 +0000 UTC m=+6.616954179"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-467060 -n newest-cni-467060
E1119 23:02:02.854462  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-467060 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8xn65 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner: exit status 1 (97.473727ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8xn65" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.46s)
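The NotFound errors in the post-mortem describe above are consistent with a namespace mismatch rather than the pods disappearing: the preceding `kubectl get po` listed non-running pods across all namespaces with `-A`, while the describe ran without a namespace and therefore looked in `default`. Assuming both pods live in kube-system, as their names suggest, the namespaced equivalent of that check would be:

	kubectl --context newest-cni-467060 -n kube-system describe pod coredns-66bc5c9577-8xn65 storage-provisioner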

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-467060 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-467060 --alsologtostderr -v=1: exit status 80 (1.962173497s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-467060 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:02:22.095062 1087815 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:02:22.095229 1087815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:22.095262 1087815 out.go:374] Setting ErrFile to fd 2...
	I1119 23:02:22.095284 1087815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:22.095576 1087815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:02:22.095864 1087815 out.go:368] Setting JSON to false
	I1119 23:02:22.095921 1087815 mustload.go:66] Loading cluster: newest-cni-467060
	I1119 23:02:22.096333 1087815 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:22.096872 1087815 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:22.116677 1087815 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:22.117014 1087815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:02:22.189960 1087815 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 23:02:22.17514583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:02:22.190605 1087815 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-467060 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 23:02:22.194120 1087815 out.go:179] * Pausing node newest-cni-467060 ... 
	I1119 23:02:22.197015 1087815 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:22.197350 1087815 ssh_runner.go:195] Run: systemctl --version
	I1119 23:02:22.197402 1087815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:22.219457 1087815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:22.321927 1087815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:02:22.336126 1087815 pause.go:52] kubelet running: true
	I1119 23:02:22.336215 1087815 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:02:22.555823 1087815 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:02:22.555913 1087815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:02:22.637939 1087815 cri.go:89] found id: "999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18"
	I1119 23:02:22.637972 1087815 cri.go:89] found id: "02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3"
	I1119 23:02:22.637977 1087815 cri.go:89] found id: "7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2"
	I1119 23:02:22.637981 1087815 cri.go:89] found id: "ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac"
	I1119 23:02:22.637983 1087815 cri.go:89] found id: "991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484"
	I1119 23:02:22.637987 1087815 cri.go:89] found id: "55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee"
	I1119 23:02:22.637990 1087815 cri.go:89] found id: ""
	I1119 23:02:22.638045 1087815 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:02:22.652777 1087815 retry.go:31] will retry after 282.737867ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:22Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:02:22.936121 1087815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:02:22.956318 1087815 pause.go:52] kubelet running: false
	I1119 23:02:22.956394 1087815 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:02:23.165931 1087815 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:02:23.166012 1087815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:02:23.238004 1087815 cri.go:89] found id: "999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18"
	I1119 23:02:23.238038 1087815 cri.go:89] found id: "02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3"
	I1119 23:02:23.238044 1087815 cri.go:89] found id: "7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2"
	I1119 23:02:23.238047 1087815 cri.go:89] found id: "ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac"
	I1119 23:02:23.238051 1087815 cri.go:89] found id: "991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484"
	I1119 23:02:23.238055 1087815 cri.go:89] found id: "55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee"
	I1119 23:02:23.238058 1087815 cri.go:89] found id: ""
	I1119 23:02:23.238113 1087815 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:02:23.249622 1087815 retry.go:31] will retry after 396.571078ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:23Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:02:23.647363 1087815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:02:23.666823 1087815 pause.go:52] kubelet running: false
	I1119 23:02:23.666998 1087815 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 23:02:23.892102 1087815 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 23:02:23.892235 1087815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 23:02:23.965896 1087815 cri.go:89] found id: "999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18"
	I1119 23:02:23.965923 1087815 cri.go:89] found id: "02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3"
	I1119 23:02:23.965928 1087815 cri.go:89] found id: "7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2"
	I1119 23:02:23.965932 1087815 cri.go:89] found id: "ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac"
	I1119 23:02:23.965935 1087815 cri.go:89] found id: "991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484"
	I1119 23:02:23.965939 1087815 cri.go:89] found id: "55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee"
	I1119 23:02:23.965942 1087815 cri.go:89] found id: ""
	I1119 23:02:23.966008 1087815 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 23:02:23.981391 1087815 out.go:203] 
	W1119 23:02:23.984264 1087815 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 23:02:23.984285 1087815 out.go:285] * 
	* 
	W1119 23:02:23.991223 1087815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:02:23.994353 1087815 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-467060 --alsologtostderr -v=1 failed: exit status 80
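The exit status 80 above is the GUEST_PAUSE abort: each pause attempt shells into the node and runs `sudo runc list -f json`, every attempt fails with "open /run/runc: no such file or directory", and after the two retries shown minikube gives up. A minimal way to rerun that probe by hand against this node (the container name comes from this run; these commands are a diagnostic sketch, not part of the harness):

	docker exec newest-cni-467060 sudo ls -ld /run/runc      # the directory the failing runc call tries to open
	docker exec newest-cni-467060 sudo runc list -f json     # the exact command the pause path retries
	docker exec newest-cni-467060 sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system   # what crio itself reports for the same containers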
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-467060
helpers_test.go:243: (dbg) docker inspect newest-cni-467060:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	        "Created": "2025-11-19T23:01:19.724337429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1085823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T23:02:05.605604536Z",
	            "FinishedAt": "2025-11-19T23:02:04.629626656Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hostname",
	        "HostsPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hosts",
	        "LogPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293-json.log",
	        "Name": "/newest-cni-467060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-467060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-467060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	                "LowerDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-467060",
	                "Source": "/var/lib/docker/volumes/newest-cni-467060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-467060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-467060",
	                "name.minikube.sigs.k8s.io": "newest-cni-467060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0367df8194dd41f0baa2ccce7b8d7f3dc1644a17ab99b731de0788789ff05716",
	            "SandboxKey": "/var/run/docker/netns/0367df8194dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-467060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:6c:71:0e:cc:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5606ddc27f5d747e0e03f70d6f5351c9e8418cffd201d9b7b94a06728f9f0e86",
	                    "EndpointID": "884de7ab31437d485e67fa06b93dc328735e02882b76fff4e5c480c1ed59d65e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-467060",
	                        "373502afc116"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
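The Ports block in the inspect output above is the same data minikube's cli_runner reads whenever it needs a published host port (the Go-template form appears verbatim in the logs below for 22/tcp). As a one-off sanity check against this container, one could read a single mapping back out with something like:

    docker container inspect newest-cni-467060 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # should print 33894, the host port published for the apiserver port 8443/tcp above
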
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060: exit status 2 (355.646999ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
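Here the Host field alone still reads Running (the docker container is up), while the non-zero exit presumably reflects other components not being in their normal running state, which is consistent with a cluster that has just been paused or whose pause partially failed. A broader spot-check is sketched below; it assumes the Kubelet and APIServer fields of minikube's status template and reuses the profile name from above:

    out/minikube-linux-arm64 status -p newest-cni-467060 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
    echo "status exit code: $?"
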
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25: (1.134698989s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                                                                                                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-841969 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-841969 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p auto-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-334366                  │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ stop    │ -p newest-cni-467060 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-467060 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ image   │ newest-cni-467060 image list --format=json                                                                                                                                                                                                    │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ pause   │ -p newest-cni-467060 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:02:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:02:05.294636 1085695 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:02:05.294976 1085695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:05.295014 1085695 out.go:374] Setting ErrFile to fd 2...
	I1119 23:02:05.295039 1085695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:05.295389 1085695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:02:05.295852 1085695 out.go:368] Setting JSON to false
	I1119 23:02:05.296948 1085695 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17054,"bootTime":1763576271,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:02:05.297044 1085695 start.go:143] virtualization:  
	I1119 23:02:05.300704 1085695 out.go:179] * [newest-cni-467060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:02:05.304931 1085695 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:02:05.305109 1085695 notify.go:221] Checking for updates...
	I1119 23:02:05.309131 1085695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:02:05.311993 1085695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:05.314958 1085695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:02:05.317875 1085695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:02:05.320576 1085695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:02:05.323867 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:05.324498 1085695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:02:05.361509 1085695 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:02:05.361739 1085695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:02:05.435937 1085695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:02:05.42664665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:02:05.436051 1085695 docker.go:319] overlay module found
	I1119 23:02:05.439172 1085695 out.go:179] * Using the docker driver based on existing profile
	I1119 23:02:05.441961 1085695 start.go:309] selected driver: docker
	I1119 23:02:05.441982 1085695 start.go:930] validating driver "docker" against &{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:05.442086 1085695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:02:05.442842 1085695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:02:05.507590 1085695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:02:05.496902931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:02:05.507944 1085695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:02:05.507974 1085695 cni.go:84] Creating CNI manager for ""
	I1119 23:02:05.508028 1085695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:02:05.508077 1085695 start.go:353] cluster config:
	{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:05.511234 1085695 out.go:179] * Starting "newest-cni-467060" primary control-plane node in "newest-cni-467060" cluster
	I1119 23:02:05.514106 1085695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:02:05.517037 1085695 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:02:05.519806 1085695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:02:05.519860 1085695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:02:05.519873 1085695 cache.go:65] Caching tarball of preloaded images
	I1119 23:02:05.519966 1085695 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:02:05.519979 1085695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:02:05.520098 1085695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:02:05.520328 1085695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:02:05.540519 1085695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:02:05.540550 1085695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:02:05.540570 1085695 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:02:05.540594 1085695 start.go:360] acquireMachinesLock for newest-cni-467060: {Name:mk24f21142ba5d810994dced903fd755f13fe1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:02:05.540660 1085695 start.go:364] duration metric: took 41.707µs to acquireMachinesLock for "newest-cni-467060"
	I1119 23:02:05.540684 1085695 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:02:05.540696 1085695 fix.go:54] fixHost starting: 
	I1119 23:02:05.540988 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:05.560411 1085695 fix.go:112] recreateIfNeeded on newest-cni-467060: state=Stopped err=<nil>
	W1119 23:02:05.560443 1085695 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:02:04.340160 1081671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 23:02:04.348394 1081671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 23:02:04.348413 1081671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 23:02:04.379196 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 23:02:04.858204 1081671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 23:02:04.858371 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:04.858463 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-334366 minikube.k8s.io/updated_at=2025_11_19T23_02_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=auto-334366 minikube.k8s.io/primary=true
	I1119 23:02:05.197657 1081671 ops.go:34] apiserver oom_adj: -16
	I1119 23:02:05.197794 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:05.698691 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:06.198417 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:06.697899 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:07.198824 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:07.698694 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:08.198116 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:08.698750 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:09.197848 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:09.434088 1081671 kubeadm.go:1114] duration metric: took 4.575757186s to wait for elevateKubeSystemPrivileges
	I1119 23:02:09.434119 1081671 kubeadm.go:403] duration metric: took 27.205718046s to StartCluster
	I1119 23:02:09.434137 1081671 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:09.434202 1081671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:09.435018 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:09.435251 1081671 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:02:09.435342 1081671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 23:02:09.435605 1081671 config.go:182] Loaded profile config "auto-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:09.435595 1081671 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:02:09.435679 1081671 addons.go:70] Setting storage-provisioner=true in profile "auto-334366"
	I1119 23:02:09.435695 1081671 addons.go:239] Setting addon storage-provisioner=true in "auto-334366"
	I1119 23:02:09.435723 1081671 host.go:66] Checking if "auto-334366" exists ...
	I1119 23:02:09.436213 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.436498 1081671 addons.go:70] Setting default-storageclass=true in profile "auto-334366"
	I1119 23:02:09.436517 1081671 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-334366"
	I1119 23:02:09.436779 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.440262 1081671 out.go:179] * Verifying Kubernetes components...
	I1119 23:02:09.445271 1081671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:09.479771 1081671 addons.go:239] Setting addon default-storageclass=true in "auto-334366"
	I1119 23:02:09.479813 1081671 host.go:66] Checking if "auto-334366" exists ...
	I1119 23:02:09.489019 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.492103 1081671 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:02:05.563551 1085695 out.go:252] * Restarting existing docker container for "newest-cni-467060" ...
	I1119 23:02:05.563635 1085695 cli_runner.go:164] Run: docker start newest-cni-467060
	I1119 23:02:05.875739 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:05.898836 1085695 kic.go:430] container "newest-cni-467060" state is running.
	I1119 23:02:05.899338 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:05.922799 1085695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:02:05.923086 1085695 machine.go:94] provisionDockerMachine start ...
	I1119 23:02:05.923158 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:05.949847 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:05.950165 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:05.950177 1085695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:02:05.951273 1085695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 23:02:09.098808 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:02:09.098898 1085695 ubuntu.go:182] provisioning hostname "newest-cni-467060"
	I1119 23:02:09.099003 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:09.132077 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:09.132383 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:09.132394 1085695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-467060 && echo "newest-cni-467060" | sudo tee /etc/hostname
	I1119 23:02:09.324237 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:02:09.324366 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:09.352560 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:09.352870 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:09.352887 1085695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-467060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-467060/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-467060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:02:09.547384 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:02:09.547412 1085695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:02:09.547439 1085695 ubuntu.go:190] setting up certificates
	I1119 23:02:09.547450 1085695 provision.go:84] configureAuth start
	I1119 23:02:09.547515 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:09.593488 1085695 provision.go:143] copyHostCerts
	I1119 23:02:09.593557 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:02:09.593567 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:02:09.593643 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:02:09.593745 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:02:09.593751 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:02:09.593776 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:02:09.593825 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:02:09.593829 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:02:09.593851 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:02:09.593895 1085695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.newest-cni-467060 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-467060]
	I1119 23:02:09.499024 1081671 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:09.499052 1081671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:02:09.499120 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:02:09.523187 1081671 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:09.523208 1081671 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:02:09.523277 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:02:09.556572 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:02:09.579017 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:02:09.886021 1081671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 23:02:09.926322 1081671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:10.219228 1081671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:10.222859 1081671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:10.913355 1081671 node_ready.go:35] waiting up to 15m0s for node "auto-334366" to be "Ready" ...
	I1119 23:02:10.915279 1081671 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.029223441s)
	I1119 23:02:10.915303 1081671 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 23:02:11.419570 1081671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.196600383s)
	I1119 23:02:11.422698 1081671 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 23:02:10.616217 1085695 provision.go:177] copyRemoteCerts
	I1119 23:02:10.616331 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:02:10.616393 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:10.647257 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:10.765168 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:02:10.795556 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 23:02:10.823430 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 23:02:10.850533 1085695 provision.go:87] duration metric: took 1.303066268s to configureAuth
	I1119 23:02:10.850562 1085695 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:02:10.850817 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:10.851027 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:10.883132 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:10.883495 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:10.883517 1085695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:02:11.333185 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:02:11.333214 1085695 machine.go:97] duration metric: took 5.410107444s to provisionDockerMachine
	I1119 23:02:11.333234 1085695 start.go:293] postStartSetup for "newest-cni-467060" (driver="docker")
	I1119 23:02:11.333244 1085695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:02:11.333345 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:02:11.333420 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.359899 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.471820 1085695 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:02:11.475270 1085695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:02:11.475302 1085695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:02:11.475318 1085695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:02:11.475375 1085695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:02:11.475455 1085695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:02:11.475560 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:02:11.484510 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:02:11.503362 1085695 start.go:296] duration metric: took 170.112595ms for postStartSetup
	I1119 23:02:11.503507 1085695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:02:11.503569 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.521682 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.620197 1085695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:02:11.625134 1085695 fix.go:56] duration metric: took 6.08442339s for fixHost
	I1119 23:02:11.625161 1085695 start.go:83] releasing machines lock for "newest-cni-467060", held for 6.084487743s
	I1119 23:02:11.625228 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:11.644287 1085695 ssh_runner.go:195] Run: cat /version.json
	I1119 23:02:11.644337 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.644378 1085695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:02:11.644443 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.666540 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.680293 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.869001 1085695 ssh_runner.go:195] Run: systemctl --version
	I1119 23:02:11.875880 1085695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:02:11.923918 1085695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:02:11.928479 1085695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:02:11.928572 1085695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:02:11.936557 1085695 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 23:02:11.936584 1085695 start.go:496] detecting cgroup driver to use...
	I1119 23:02:11.936644 1085695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:02:11.936718 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:02:11.953254 1085695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:02:11.967605 1085695 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:02:11.967668 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:02:11.983090 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:02:11.996582 1085695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:02:12.113552 1085695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:02:12.236173 1085695 docker.go:234] disabling docker service ...
	I1119 23:02:12.236244 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:02:12.251280 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:02:12.265234 1085695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:02:12.383634 1085695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:02:12.511714 1085695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:02:12.525931 1085695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:02:12.545379 1085695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:02:12.545517 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.554501 1085695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:02:12.554681 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.565217 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.576945 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.586216 1085695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:02:12.594473 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.605441 1085695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.613971 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.624423 1085695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:02:12.633463 1085695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:02:12.641744 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:12.759313 1085695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:02:12.930885 1085695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:02:12.930953 1085695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:02:12.936176 1085695 start.go:564] Will wait 60s for crictl version
	I1119 23:02:12.936251 1085695 ssh_runner.go:195] Run: which crictl
	I1119 23:02:12.947332 1085695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:02:12.979421 1085695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:02:12.979596 1085695 ssh_runner.go:195] Run: crio --version
	I1119 23:02:13.018184 1085695 ssh_runner.go:195] Run: crio --version
	I1119 23:02:13.053655 1085695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:02:13.056467 1085695 cli_runner.go:164] Run: docker network inspect newest-cni-467060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:02:13.073455 1085695 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 23:02:13.077537 1085695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:02:13.090406 1085695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 23:02:11.423161 1081671 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-334366" context rescaled to 1 replicas
	I1119 23:02:11.425518 1081671 addons.go:515] duration metric: took 1.989907587s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1119 23:02:12.916826 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	I1119 23:02:13.093134 1085695 kubeadm.go:884] updating cluster {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:02:13.093284 1085695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:02:13.093361 1085695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:02:13.129032 1085695 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:02:13.129057 1085695 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:02:13.129114 1085695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:02:13.154579 1085695 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:02:13.154611 1085695 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:02:13.154620 1085695 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 23:02:13.154731 1085695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-467060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
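	The systemd drop-in above first blanks ExecStart= and then redefines it with the minikube-managed kubelet invocation (bootstrap kubeconfig, cgroups-per-qos disabled, node IP 192.168.76.2); it only takes effect after the daemon-reload a few lines below. A hypothetical way to confirm the effective unit on the node:
	  sudo systemctl cat kubelet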
	I1119 23:02:13.154819 1085695 ssh_runner.go:195] Run: crio config
	I1119 23:02:13.229453 1085695 cni.go:84] Creating CNI manager for ""
	I1119 23:02:13.229475 1085695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:02:13.229494 1085695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 23:02:13.229517 1085695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-467060 NodeName:newest-cni-467060 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:02:13.229652 1085695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-467060"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:02:13.229729 1085695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:02:13.237402 1085695 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:02:13.237505 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:02:13.245093 1085695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 23:02:13.257796 1085695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:02:13.270831 1085695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
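	The 2212-byte file written above is the four-document kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); minikube later diffs it against /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. If the kubeadm build on the node supports the "config validate" subcommand, the rendered file can be sanity-checked in place (hypothetical, not part of this run):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new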
	I1119 23:02:13.283787 1085695 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:02:13.287752 1085695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:02:13.297551 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:13.414045 1085695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:13.434021 1085695 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060 for IP: 192.168.76.2
	I1119 23:02:13.434045 1085695 certs.go:195] generating shared ca certs ...
	I1119 23:02:13.434063 1085695 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:13.434206 1085695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:02:13.434259 1085695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:02:13.434272 1085695 certs.go:257] generating profile certs ...
	I1119 23:02:13.434364 1085695 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.key
	I1119 23:02:13.434433 1085695 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a
	I1119 23:02:13.434479 1085695 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key
	I1119 23:02:13.434606 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:02:13.434642 1085695 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:02:13.434652 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:02:13.434677 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:02:13.434702 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:02:13.434727 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:02:13.434778 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:02:13.435373 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:02:13.462248 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:02:13.482580 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:02:13.505226 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:02:13.527191 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 23:02:13.557603 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 23:02:13.583070 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:02:13.604899 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:02:13.633874 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:02:13.655474 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:02:13.676933 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:02:13.698128 1085695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:02:13.712113 1085695 ssh_runner.go:195] Run: openssl version
	I1119 23:02:13.718979 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:02:13.728036 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.732448 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.732559 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.776860 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:02:13.785182 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:02:13.793734 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.797824 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.797937 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.839622 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:02:13.847894 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:02:13.856572 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.860641 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.860705 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.907116 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
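	The three symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of minikubeCA.pem, 862175.pem and 8621752.pem respectively, which is what the preceding "openssl x509 -hash -noout" runs compute; that naming convention is how OpenSSL-based clients locate CAs under /etc/ssl/certs. Reproducing one mapping by hand (hypothetical, not part of this run):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0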
	I1119 23:02:13.917177 1085695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:02:13.921256 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:02:13.962747 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:02:14.004124 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:02:14.049214 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:02:14.112868 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:02:14.203663 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
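	Each "-checkend 86400" run above exits 0 only if the certificate is still valid 24 hours (86400 seconds) from now, so a non-zero exit would indicate the cert expires within a day. The same check with an explicit verdict (hypothetical, not part of this run):
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid in 24h" || echo "expires within 24h"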
	I1119 23:02:14.257570 1085695 kubeadm.go:401] StartCluster: {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:14.257720 1085695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:02:14.257839 1085695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:02:14.368093 1085695 cri.go:89] found id: "7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2"
	I1119 23:02:14.368158 1085695 cri.go:89] found id: "ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac"
	I1119 23:02:14.368179 1085695 cri.go:89] found id: "991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484"
	I1119 23:02:14.368203 1085695 cri.go:89] found id: "55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee"
	I1119 23:02:14.368229 1085695 cri.go:89] found id: ""
	I1119 23:02:14.368315 1085695 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 23:02:14.394336 1085695 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:14Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:02:14.394495 1085695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:02:14.409801 1085695 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:02:14.409865 1085695 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:02:14.409949 1085695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:02:14.429012 1085695 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:02:14.429637 1085695 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-467060" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:14.429941 1085695 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-467060" cluster setting kubeconfig missing "newest-cni-467060" context setting]
	I1119 23:02:14.430470 1085695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.432129 1085695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:02:14.448950 1085695 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 23:02:14.449026 1085695 kubeadm.go:602] duration metric: took 39.140085ms to restartPrimaryControlPlane
	I1119 23:02:14.449052 1085695 kubeadm.go:403] duration metric: took 191.491291ms to StartCluster
	I1119 23:02:14.449106 1085695 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.449193 1085695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:14.450071 1085695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.450347 1085695 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:02:14.450714 1085695 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:02:14.450810 1085695 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-467060"
	I1119 23:02:14.450827 1085695 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-467060"
	W1119 23:02:14.450839 1085695 addons.go:248] addon storage-provisioner should already be in state true
	I1119 23:02:14.450850 1085695 addons.go:70] Setting dashboard=true in profile "newest-cni-467060"
	I1119 23:02:14.450882 1085695 addons.go:70] Setting default-storageclass=true in profile "newest-cni-467060"
	I1119 23:02:14.450906 1085695 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-467060"
	I1119 23:02:14.450951 1085695 addons.go:239] Setting addon dashboard=true in "newest-cni-467060"
	W1119 23:02:14.451070 1085695 addons.go:248] addon dashboard should already be in state true
	I1119 23:02:14.451127 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.451211 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.451752 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.450878 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.455457 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.450785 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:14.462185 1085695 out.go:179] * Verifying Kubernetes components...
	I1119 23:02:14.470994 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:14.507056 1085695 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 23:02:14.507509 1085695 addons.go:239] Setting addon default-storageclass=true in "newest-cni-467060"
	W1119 23:02:14.507532 1085695 addons.go:248] addon default-storageclass should already be in state true
	I1119 23:02:14.507560 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.508011 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.518696 1085695 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 23:02:14.526986 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 23:02:14.527021 1085695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 23:02:14.527095 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.544744 1085695 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:02:14.547990 1085695 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:14.548015 1085695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:02:14.548082 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.573961 1085695 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:14.573990 1085695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:02:14.574053 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.590951 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:14.596431 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:14.624660 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:14.855336 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:14.872467 1085695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:14.956267 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 23:02:14.956293 1085695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 23:02:14.987598 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:15.084860 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 23:02:15.084887 1085695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 23:02:15.162081 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 23:02:15.162104 1085695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 23:02:15.186625 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 23:02:15.186648 1085695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 23:02:15.204893 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 23:02:15.204919 1085695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 23:02:15.222746 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 23:02:15.222770 1085695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 23:02:15.250432 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 23:02:15.250455 1085695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 23:02:15.269921 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 23:02:15.269945 1085695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 23:02:15.288629 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:02:15.288654 1085695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1119 23:02:15.417117 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	W1119 23:02:17.917256 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	I1119 23:02:15.310711 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:02:20.632619 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.777246074s)
	I1119 23:02:20.632662 1085695 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.760173058s)
	I1119 23:02:20.632693 1085695 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:02:20.632755 1085695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:02:20.632822 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.645199962s)
	I1119 23:02:20.633167 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.322423457s)
	I1119 23:02:20.636259 1085695 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-467060 addons enable metrics-server
	
	I1119 23:02:20.665253 1085695 api_server.go:72] duration metric: took 6.21483797s to wait for apiserver process to appear ...
	I1119 23:02:20.665280 1085695 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:02:20.665298 1085695 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:02:20.674132 1085695 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:02:20.674162 1085695 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:02:20.678507 1085695 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:02:20.681404 1085695 addons.go:515] duration metric: took 6.230678418s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:02:21.165695 1085695 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:02:21.174739 1085695 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
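	The probe at 23:02:20 returned 500 only because the rbac/bootstrap-roles post-start hook had not finished ("[-]poststarthook/rbac/bootstrap-roles failed" above); roughly half a second later the same endpoint answers 200. A hypothetical manual probe from the host, with -k because the apiserver certificate is signed by the minikube CA rather than a system-trusted one:
	  curl -k https://192.168.76.2:8443/healthz
	  # append ?verbose to see the per-check [+]/[-] breakdown shown in the 500 response above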
	I1119 23:02:21.175797 1085695 api_server.go:141] control plane version: v1.34.1
	I1119 23:02:21.175828 1085695 api_server.go:131] duration metric: took 510.540035ms to wait for apiserver health ...
	I1119 23:02:21.175838 1085695 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:02:21.179324 1085695 system_pods.go:59] 8 kube-system pods found
	I1119 23:02:21.179371 1085695 system_pods.go:61] "coredns-66bc5c9577-8xn65" [ef6f99cd-44ff-4adf-bb68-7328e6d5178e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:02:21.179387 1085695 system_pods.go:61] "etcd-newest-cni-467060" [3ab9e3f1-b893-4477-b3e5-5d3f99a18ea0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:02:21.179396 1085695 system_pods.go:61] "kindnet-4sgcn" [eeb9b480-cec1-4be0-a705-e73199a83c5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 23:02:21.179411 1085695 system_pods.go:61] "kube-apiserver-newest-cni-467060" [8d6e3d44-2b2e-49ce-b2e7-0e21ed742414] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:02:21.179428 1085695 system_pods.go:61] "kube-controller-manager-newest-cni-467060" [5b42fbd4-53ad-4282-8760-3d28be1b3a9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:02:21.179436 1085695 system_pods.go:61] "kube-proxy-ldb2r" [cdecedd2-bfb5-4826-be33-924e26a05b88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 23:02:21.179446 1085695 system_pods.go:61] "kube-scheduler-newest-cni-467060" [91153281-645c-4b6e-9408-1d3946edc224] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:02:21.179454 1085695 system_pods.go:61] "storage-provisioner" [0fb6f975-1da8-41b4-91f4-240a5daf116b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:02:21.179461 1085695 system_pods.go:74] duration metric: took 3.616244ms to wait for pod list to return data ...
	I1119 23:02:21.179474 1085695 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:02:21.182114 1085695 default_sa.go:45] found service account: "default"
	I1119 23:02:21.182143 1085695 default_sa.go:55] duration metric: took 2.662152ms for default service account to be created ...
	I1119 23:02:21.182157 1085695 kubeadm.go:587] duration metric: took 6.731755873s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:02:21.182173 1085695 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:02:21.185048 1085695 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:02:21.185092 1085695 node_conditions.go:123] node cpu capacity is 2
	I1119 23:02:21.185106 1085695 node_conditions.go:105] duration metric: took 2.928033ms to run NodePressure ...
	I1119 23:02:21.185119 1085695 start.go:242] waiting for startup goroutines ...
	I1119 23:02:21.185127 1085695 start.go:247] waiting for cluster config update ...
	I1119 23:02:21.185141 1085695 start.go:256] writing updated cluster config ...
	I1119 23:02:21.185533 1085695 ssh_runner.go:195] Run: rm -f paused
	I1119 23:02:21.269578 1085695 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:02:21.273332 1085695 out.go:179] * Done! kubectl is now configured to use "newest-cni-467060" cluster and "default" namespace by default
	W1119 23:02:20.416734 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	W1119 23:02:22.917590 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.859176725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.867207932Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d459774a-aff5-4e1d-82ab-d4b8344385e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.867961227Z" level=info msg="Running pod sandbox: kube-system/kindnet-4sgcn/POD" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.868018795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.878525248Z" level=info msg="Ran pod sandbox b83a25c67a98daec850fa23fe8ba8954ba035448063725cabf422456f24eb215 with infra container: kube-system/kube-proxy-ldb2r/POD" id=d459774a-aff5-4e1d-82ab-d4b8344385e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.881391595Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.882230051Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2da69901-dfdc-4e9a-825f-8237f6685897 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.883383653Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=794f91cb-c310-4af9-be3b-4362a8d85f58 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.887710894Z" level=info msg="Creating container: kube-system/kube-proxy-ldb2r/kube-proxy" id=f63b506e-a5eb-460e-818a-e88f75f869fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.887868073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.897641827Z" level=info msg="Ran pod sandbox cc7467a903f188f27a1d37854a6c5f25afc91496335bf1f8a9332b2c16138252 with infra container: kube-system/kindnet-4sgcn/POD" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.906102363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.90662103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.907779981Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8213d64c-1440-4f5b-8895-20a896957d00 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.909012255Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8894f35e-16f3-46c3-8187-87d13ab80b7b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.910207202Z" level=info msg="Creating container: kube-system/kindnet-4sgcn/kindnet-cni" id=5e5919b5-dde9-44a7-b716-35b7a4affbd2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.910312377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.91781663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.918523023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.94981176Z" level=info msg="Created container 02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3: kube-system/kube-proxy-ldb2r/kube-proxy" id=f63b506e-a5eb-460e-818a-e88f75f869fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.950930071Z" level=info msg="Starting container: 02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3" id=c6f1a871-4aa8-414a-b06a-205f0f3e563c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.952669204Z" level=info msg="Created container 999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18: kube-system/kindnet-4sgcn/kindnet-cni" id=5e5919b5-dde9-44a7-b716-35b7a4affbd2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.953915065Z" level=info msg="Starting container: 999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18" id=17f1f556-6d7b-45a5-a807-3c1b64f3954a name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.955803803Z" level=info msg="Started container" PID=1070 containerID=999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18 description=kube-system/kindnet-4sgcn/kindnet-cni id=17f1f556-6d7b-45a5-a807-3c1b64f3954a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc7467a903f188f27a1d37854a6c5f25afc91496335bf1f8a9332b2c16138252
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.956322725Z" level=info msg="Started container" PID=1069 containerID=02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3 description=kube-system/kube-proxy-ldb2r/kube-proxy id=c6f1a871-4aa8-414a-b06a-205f0f3e563c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83a25c67a98daec850fa23fe8ba8954ba035448063725cabf422456f24eb215
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	999a7516f7fc3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   cc7467a903f18       kindnet-4sgcn                               kube-system
	02d180d011beb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   b83a25c67a98d       kube-proxy-ldb2r                            kube-system
	7aeae19d97451       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   eb7f311076cee       kube-controller-manager-newest-cni-467060   kube-system
	ab32007c8b4af       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   252366b5c63cf       kube-scheduler-newest-cni-467060            kube-system
	991708e4be346       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   e32aed6b3904a       kube-apiserver-newest-cni-467060            kube-system
	55e2d76de39f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   7867a0d72f14c       etcd-newest-cni-467060                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-467060
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-467060
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-467060
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T23_01_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:01:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-467060
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:02:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-467060
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                ff8061df-c1de-4a74-aef4-0d63b13a9d04
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-467060                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-4sgcn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-467060             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-467060    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-ldb2r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-467060             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-467060 event: Registered Node newest-cni-467060 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-467060 event: Registered Node newest-cni-467060 in Controller
	
	
	==> dmesg <==
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	[Nov19 23:01] overlayfs: idmapped layers are currently not supported
	[ +13.188823] overlayfs: idmapped layers are currently not supported
	[Nov19 23:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee] <==
	{"level":"warn","ts":"2025-11-19T23:02:18.009276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.046707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.078007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.146212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.159918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.189176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.223877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.248394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.270565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.297704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.336659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.358229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.377760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.392772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.409823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.429574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.452971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.486201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.528091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.542118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.585663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.591082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.606498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.627215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.686307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47316","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:25 up  4:44,  0 user,  load average: 5.51, 3.76, 2.86
	Linux newest-cni-467060 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18] <==
	I1119 23:02:21.120838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:02:21.121811       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 23:02:21.122060       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:02:21.122127       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:02:21.122192       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:02:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:02:21.320601       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:02:21.320620       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:02:21.320628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:02:21.320938       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484] <==
	I1119 23:02:19.570624       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 23:02:19.571357       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:02:19.571392       1 policy_source.go:240] refreshing policies
	I1119 23:02:19.585371       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:02:19.629497       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:02:19.648675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:02:19.652600       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:02:19.652874       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:02:19.661527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:02:19.663410       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:02:19.663521       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:02:19.702514       1 cache.go:39] Caches are synced for autoregister controller
	E1119 23:02:19.716915       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 23:02:20.277216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:02:20.316535       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:02:20.345326       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:02:20.355941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:02:20.364323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:02:20.375331       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:02:20.461207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.120.173"}
	I1119 23:02:20.493394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.2.231"}
	I1119 23:02:23.111563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:02:23.357933       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:02:23.508254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:02:23.561687       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2] <==
	I1119 23:02:22.989114       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 23:02:22.992415       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 23:02:22.993624       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 23:02:23.001311       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:02:23.001529       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:02:23.001698       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:02:23.001909       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-467060"
	I1119 23:02:23.002138       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 23:02:23.012813       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:02:23.012912       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 23:02:23.012924       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 23:02:23.012935       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 23:02:23.012944       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 23:02:23.012954       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 23:02:23.012962       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 23:02:23.012978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:02:23.016009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:02:23.016174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:02:23.017225       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:02:23.023568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:02:23.034419       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 23:02:23.035010       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:02:23.037229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:02:23.037244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:02:23.057726       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3] <==
	I1119 23:02:21.014173       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:02:21.100403       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:02:21.201124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:02:21.201226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 23:02:21.201364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:02:21.237866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:02:21.237983       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:02:21.241842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:02:21.242215       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:02:21.242410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:02:21.244344       1 config.go:200] "Starting service config controller"
	I1119 23:02:21.244410       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:02:21.244468       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:02:21.244497       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:02:21.244532       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:02:21.244558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:02:21.245267       1 config.go:309] "Starting node config controller"
	I1119 23:02:21.245336       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:02:21.245370       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:02:21.345322       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:02:21.345368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:02:21.345399       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac] <==
	I1119 23:02:17.264071       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:02:19.453598       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 23:02:19.453637       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 23:02:19.453651       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:02:19.453659       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:02:19.573250       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:02:19.578519       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:02:19.584691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:02:19.589996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:02:19.590213       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:02:19.590650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:02:19.691112       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.644418     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-467060\" already exists" pod="kube-system/kube-apiserver-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.644464     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.651345     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-467060\" already exists" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.669457     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-467060\" already exists" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.669650     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691657     737 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691764     737 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691796     737 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.695105     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.719428     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-467060\" already exists" pod="kube-system/kube-scheduler-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.719463     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.745602     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-467060\" already exists" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.933667     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.944809     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-467060\" already exists" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.548268     737 apiserver.go:52] "Watching apiserver"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.584920     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585161     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-lib-modules\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585197     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-lib-modules\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-cni-cfg\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585263     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-xtables-lock\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585288     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-xtables-lock\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.614455     737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-467060 -n newest-cni-467060
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-467060 -n newest-cni-467060: exit status 2 (348.884259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-467060 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl: exit status 1 (90.928627ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8xn65" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6j6k8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-x5bpl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl: exit status 1
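For re-running the non-running-pod query from the post-mortem by hand, the field-selector form used by the harness above is (a sketch assuming the same kubeconfig context as this report; the pod names will differ on a live cluster):

    kubectl --context newest-cni-467060 get pods -A --field-selector=status.phase!=Running -o 'jsonpath={.items[*].metadata.name}'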
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-467060
helpers_test.go:243: (dbg) docker inspect newest-cni-467060:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	        "Created": "2025-11-19T23:01:19.724337429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1085823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T23:02:05.605604536Z",
	            "FinishedAt": "2025-11-19T23:02:04.629626656Z"
	        },
	        "Image": "sha256:161ae512ea03f95c595a46a20f1dbd1d1e737c6a82df3ed673e089531af665da",
	        "ResolvConfPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hostname",
	        "HostsPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/hosts",
	        "LogPath": "/var/lib/docker/containers/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293/373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293-json.log",
	        "Name": "/newest-cni-467060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-467060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-467060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "373502afc116738fcf3848395c19af5748895c1f871f2fb3aafb93000ec9d293",
	                "LowerDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc-init/diff:/var/lib/docker/overlay2/b33aa46ad3e25cf0b7d7cdd648432c8876f19868f774e9f94bad7445bc13d509/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ab44d2cbdacf2f7f343d06bea81dbefe35a1181e2f4cd70a378966eddc061cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-467060",
	                "Source": "/var/lib/docker/volumes/newest-cni-467060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-467060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-467060",
	                "name.minikube.sigs.k8s.io": "newest-cni-467060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0367df8194dd41f0baa2ccce7b8d7f3dc1644a17ab99b731de0788789ff05716",
	            "SandboxKey": "/var/run/docker/netns/0367df8194dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-467060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:6c:71:0e:cc:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5606ddc27f5d747e0e03f70d6f5351c9e8418cffd201d9b7b94a06728f9f0e86",
	                    "EndpointID": "884de7ab31437d485e67fa06b93dc328735e02882b76fff4e5c480c1ed59d65e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-467060",
	                        "373502afc116"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
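The inspect data above can be narrowed to just the forwarded API-server port with a Go template (a minimal sketch; the 8443/tcp mapping and profile name are taken from the output above):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-467060

For the state captured here this would print 33894, matching the port binding shown in the NetworkSettings section.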
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060: exit status 2 (347.15828ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-467060 logs -n 25: (1.094851909s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-044665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p embed-certs-044665 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-841969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-841969 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 22:59 UTC │
	│ start   │ -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 22:59 UTC │ 19 Nov 25 23:00 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:00 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ embed-certs-044665 image list --format=json                                                                                                                                                                                                   │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p embed-certs-044665 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p embed-certs-044665                                                                                                                                                                                                                         │ embed-certs-044665           │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-841969 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-841969 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-841969                                                                                                                                                                                                               │ default-k8s-diff-port-841969 │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │ 19 Nov 25 23:01 UTC │
	│ start   │ -p auto-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-334366                  │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-467060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:01 UTC │                     │
	│ stop    │ -p newest-cni-467060 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-467060 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ start   │ -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ image   │ newest-cni-467060 image list --format=json                                                                                                                                                                                                    │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │ 19 Nov 25 23:02 UTC │
	│ pause   │ -p newest-cni-467060 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-467060            │ jenkins │ v1.37.0 │ 19 Nov 25 23:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:02:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:02:05.294636 1085695 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:02:05.294976 1085695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:05.295014 1085695 out.go:374] Setting ErrFile to fd 2...
	I1119 23:02:05.295039 1085695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:02:05.295389 1085695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 23:02:05.295852 1085695 out.go:368] Setting JSON to false
	I1119 23:02:05.296948 1085695 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17054,"bootTime":1763576271,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 23:02:05.297044 1085695 start.go:143] virtualization:  
	I1119 23:02:05.300704 1085695 out.go:179] * [newest-cni-467060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 23:02:05.304931 1085695 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:02:05.305109 1085695 notify.go:221] Checking for updates...
	I1119 23:02:05.309131 1085695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:02:05.311993 1085695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:05.314958 1085695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 23:02:05.317875 1085695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 23:02:05.320576 1085695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:02:05.323867 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:05.324498 1085695 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:02:05.361509 1085695 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 23:02:05.361739 1085695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:02:05.435937 1085695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:02:05.42664665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:02:05.436051 1085695 docker.go:319] overlay module found
	I1119 23:02:05.439172 1085695 out.go:179] * Using the docker driver based on existing profile
	I1119 23:02:05.441961 1085695 start.go:309] selected driver: docker
	I1119 23:02:05.441982 1085695 start.go:930] validating driver "docker" against &{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:05.442086 1085695 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:02:05.442842 1085695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 23:02:05.507590 1085695 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 23:02:05.496902931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 23:02:05.507944 1085695 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:02:05.507974 1085695 cni.go:84] Creating CNI manager for ""
	I1119 23:02:05.508028 1085695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:02:05.508077 1085695 start.go:353] cluster config:
	{Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:05.511234 1085695 out.go:179] * Starting "newest-cni-467060" primary control-plane node in "newest-cni-467060" cluster
	I1119 23:02:05.514106 1085695 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 23:02:05.517037 1085695 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 23:02:05.519806 1085695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:02:05.519860 1085695 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 23:02:05.519873 1085695 cache.go:65] Caching tarball of preloaded images
	I1119 23:02:05.519966 1085695 preload.go:238] Found /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 23:02:05.519979 1085695 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:02:05.520098 1085695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:02:05.520328 1085695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 23:02:05.540519 1085695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 23:02:05.540550 1085695 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 23:02:05.540570 1085695 cache.go:243] Successfully downloaded all kic artifacts
	I1119 23:02:05.540594 1085695 start.go:360] acquireMachinesLock for newest-cni-467060: {Name:mk24f21142ba5d810994dced903fd755f13fe1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:02:05.540660 1085695 start.go:364] duration metric: took 41.707µs to acquireMachinesLock for "newest-cni-467060"
	I1119 23:02:05.540684 1085695 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:02:05.540696 1085695 fix.go:54] fixHost starting: 
	I1119 23:02:05.540988 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:05.560411 1085695 fix.go:112] recreateIfNeeded on newest-cni-467060: state=Stopped err=<nil>
	W1119 23:02:05.560443 1085695 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:02:04.340160 1081671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 23:02:04.348394 1081671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 23:02:04.348413 1081671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 23:02:04.379196 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 23:02:04.858204 1081671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 23:02:04.858371 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:04.858463 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-334366 minikube.k8s.io/updated_at=2025_11_19T23_02_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=auto-334366 minikube.k8s.io/primary=true
	I1119 23:02:05.197657 1081671 ops.go:34] apiserver oom_adj: -16
	I1119 23:02:05.197794 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:05.698691 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:06.198417 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:06.697899 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:07.198824 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:07.698694 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:08.198116 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:08.698750 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:09.197848 1081671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 23:02:09.434088 1081671 kubeadm.go:1114] duration metric: took 4.575757186s to wait for elevateKubeSystemPrivileges
	I1119 23:02:09.434119 1081671 kubeadm.go:403] duration metric: took 27.205718046s to StartCluster
	I1119 23:02:09.434137 1081671 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:09.434202 1081671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:09.435018 1081671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:09.435251 1081671 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:02:09.435342 1081671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 23:02:09.435605 1081671 config.go:182] Loaded profile config "auto-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:09.435595 1081671 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:02:09.435679 1081671 addons.go:70] Setting storage-provisioner=true in profile "auto-334366"
	I1119 23:02:09.435695 1081671 addons.go:239] Setting addon storage-provisioner=true in "auto-334366"
	I1119 23:02:09.435723 1081671 host.go:66] Checking if "auto-334366" exists ...
	I1119 23:02:09.436213 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.436498 1081671 addons.go:70] Setting default-storageclass=true in profile "auto-334366"
	I1119 23:02:09.436517 1081671 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-334366"
	I1119 23:02:09.436779 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.440262 1081671 out.go:179] * Verifying Kubernetes components...
	I1119 23:02:09.445271 1081671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:09.479771 1081671 addons.go:239] Setting addon default-storageclass=true in "auto-334366"
	I1119 23:02:09.479813 1081671 host.go:66] Checking if "auto-334366" exists ...
	I1119 23:02:09.489019 1081671 cli_runner.go:164] Run: docker container inspect auto-334366 --format={{.State.Status}}
	I1119 23:02:09.492103 1081671 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:02:05.563551 1085695 out.go:252] * Restarting existing docker container for "newest-cni-467060" ...
	I1119 23:02:05.563635 1085695 cli_runner.go:164] Run: docker start newest-cni-467060
	I1119 23:02:05.875739 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:05.898836 1085695 kic.go:430] container "newest-cni-467060" state is running.
	I1119 23:02:05.899338 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:05.922799 1085695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/config.json ...
	I1119 23:02:05.923086 1085695 machine.go:94] provisionDockerMachine start ...
	I1119 23:02:05.923158 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:05.949847 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:05.950165 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:05.950177 1085695 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:02:05.951273 1085695 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 23:02:09.098808 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:02:09.098898 1085695 ubuntu.go:182] provisioning hostname "newest-cni-467060"
	I1119 23:02:09.099003 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:09.132077 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:09.132383 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:09.132394 1085695 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-467060 && echo "newest-cni-467060" | sudo tee /etc/hostname
	I1119 23:02:09.324237 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-467060
	
	I1119 23:02:09.324366 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:09.352560 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:09.352870 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:09.352887 1085695 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-467060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-467060/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-467060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:02:09.547384 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:02:09.547412 1085695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-860325/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-860325/.minikube}
	I1119 23:02:09.547439 1085695 ubuntu.go:190] setting up certificates
	I1119 23:02:09.547450 1085695 provision.go:84] configureAuth start
	I1119 23:02:09.547515 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:09.593488 1085695 provision.go:143] copyHostCerts
	I1119 23:02:09.593557 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem, removing ...
	I1119 23:02:09.593567 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem
	I1119 23:02:09.593643 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/ca.pem (1078 bytes)
	I1119 23:02:09.593745 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem, removing ...
	I1119 23:02:09.593751 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem
	I1119 23:02:09.593776 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/cert.pem (1123 bytes)
	I1119 23:02:09.593825 1085695 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem, removing ...
	I1119 23:02:09.593829 1085695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem
	I1119 23:02:09.593851 1085695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-860325/.minikube/key.pem (1679 bytes)
	I1119 23:02:09.593895 1085695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem org=jenkins.newest-cni-467060 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-467060]
	I1119 23:02:09.499024 1081671 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:09.499052 1081671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:02:09.499120 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:02:09.523187 1081671 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:09.523208 1081671 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:02:09.523277 1081671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-334366
	I1119 23:02:09.556572 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:02:09.579017 1081671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33886 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/auto-334366/id_rsa Username:docker}
	I1119 23:02:09.886021 1081671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 23:02:09.926322 1081671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:10.219228 1081671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:10.222859 1081671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:10.913355 1081671 node_ready.go:35] waiting up to 15m0s for node "auto-334366" to be "Ready" ...
	I1119 23:02:10.915279 1081671 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.029223441s)
	I1119 23:02:10.915303 1081671 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 23:02:11.419570 1081671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.196600383s)
	I1119 23:02:11.422698 1081671 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 23:02:10.616217 1085695 provision.go:177] copyRemoteCerts
	I1119 23:02:10.616331 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:02:10.616393 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:10.647257 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:10.765168 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:02:10.795556 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 23:02:10.823430 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 23:02:10.850533 1085695 provision.go:87] duration metric: took 1.303066268s to configureAuth
	I1119 23:02:10.850562 1085695 ubuntu.go:206] setting minikube options for container-runtime
	I1119 23:02:10.850817 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:10.851027 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:10.883132 1085695 main.go:143] libmachine: Using SSH client type: native
	I1119 23:02:10.883495 1085695 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33891 <nil> <nil>}
	I1119 23:02:10.883517 1085695 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:02:11.333185 1085695 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:02:11.333214 1085695 machine.go:97] duration metric: took 5.410107444s to provisionDockerMachine
	I1119 23:02:11.333234 1085695 start.go:293] postStartSetup for "newest-cni-467060" (driver="docker")
	I1119 23:02:11.333244 1085695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:02:11.333345 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:02:11.333420 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.359899 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.471820 1085695 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:02:11.475270 1085695 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 23:02:11.475302 1085695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 23:02:11.475318 1085695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/addons for local assets ...
	I1119 23:02:11.475375 1085695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-860325/.minikube/files for local assets ...
	I1119 23:02:11.475455 1085695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem -> 8621752.pem in /etc/ssl/certs
	I1119 23:02:11.475560 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:02:11.484510 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:02:11.503362 1085695 start.go:296] duration metric: took 170.112595ms for postStartSetup
	I1119 23:02:11.503507 1085695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:02:11.503569 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.521682 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.620197 1085695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 23:02:11.625134 1085695 fix.go:56] duration metric: took 6.08442339s for fixHost
	I1119 23:02:11.625161 1085695 start.go:83] releasing machines lock for "newest-cni-467060", held for 6.084487743s
	I1119 23:02:11.625228 1085695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-467060
	I1119 23:02:11.644287 1085695 ssh_runner.go:195] Run: cat /version.json
	I1119 23:02:11.644337 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.644378 1085695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:02:11.644443 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:11.666540 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.680293 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:11.869001 1085695 ssh_runner.go:195] Run: systemctl --version
	I1119 23:02:11.875880 1085695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:02:11.923918 1085695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:02:11.928479 1085695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:02:11.928572 1085695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:02:11.936557 1085695 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 23:02:11.936584 1085695 start.go:496] detecting cgroup driver to use...
	I1119 23:02:11.936644 1085695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 23:02:11.936718 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:02:11.953254 1085695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:02:11.967605 1085695 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:02:11.967668 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:02:11.983090 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:02:11.996582 1085695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:02:12.113552 1085695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:02:12.236173 1085695 docker.go:234] disabling docker service ...
	I1119 23:02:12.236244 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:02:12.251280 1085695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:02:12.265234 1085695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:02:12.383634 1085695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:02:12.511714 1085695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:02:12.525931 1085695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:02:12.545379 1085695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:02:12.545517 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.554501 1085695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:02:12.554681 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.565217 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.576945 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.586216 1085695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:02:12.594473 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.605441 1085695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.613971 1085695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:02:12.624423 1085695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:02:12.633463 1085695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:02:12.641744 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:12.759313 1085695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:02:12.930885 1085695 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:02:12.930953 1085695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:02:12.936176 1085695 start.go:564] Will wait 60s for crictl version
	I1119 23:02:12.936251 1085695 ssh_runner.go:195] Run: which crictl
	I1119 23:02:12.947332 1085695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 23:02:12.979421 1085695 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 23:02:12.979596 1085695 ssh_runner.go:195] Run: crio --version
	I1119 23:02:13.018184 1085695 ssh_runner.go:195] Run: crio --version
	I1119 23:02:13.053655 1085695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 23:02:13.056467 1085695 cli_runner.go:164] Run: docker network inspect newest-cni-467060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 23:02:13.073455 1085695 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 23:02:13.077537 1085695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:02:13.090406 1085695 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 23:02:11.423161 1081671 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-334366" context rescaled to 1 replicas
	I1119 23:02:11.425518 1081671 addons.go:515] duration metric: took 1.989907587s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1119 23:02:12.916826 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	I1119 23:02:13.093134 1085695 kubeadm.go:884] updating cluster {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:02:13.093284 1085695 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:02:13.093361 1085695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:02:13.129032 1085695 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:02:13.129057 1085695 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:02:13.129114 1085695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:02:13.154579 1085695 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:02:13.154611 1085695 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:02:13.154620 1085695 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 23:02:13.154731 1085695 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-467060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:02:13.154819 1085695 ssh_runner.go:195] Run: crio config
	I1119 23:02:13.229453 1085695 cni.go:84] Creating CNI manager for ""
	I1119 23:02:13.229475 1085695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 23:02:13.229494 1085695 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 23:02:13.229517 1085695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-467060 NodeName:newest-cni-467060 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:02:13.229652 1085695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-467060"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:02:13.229729 1085695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:02:13.237402 1085695 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:02:13.237505 1085695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:02:13.245093 1085695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 23:02:13.257796 1085695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:02:13.270831 1085695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1119 23:02:13.283787 1085695 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 23:02:13.287752 1085695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:02:13.297551 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:13.414045 1085695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:13.434021 1085695 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060 for IP: 192.168.76.2
	I1119 23:02:13.434045 1085695 certs.go:195] generating shared ca certs ...
	I1119 23:02:13.434063 1085695 certs.go:227] acquiring lock for ca certs: {Name:mkeb1b9a9cc8b89eb238edfbc75392214525edfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:13.434206 1085695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key
	I1119 23:02:13.434259 1085695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key
	I1119 23:02:13.434272 1085695 certs.go:257] generating profile certs ...
	I1119 23:02:13.434364 1085695 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/client.key
	I1119 23:02:13.434433 1085695 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key.08ecb68a
	I1119 23:02:13.434479 1085695 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key
	I1119 23:02:13.434606 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem (1338 bytes)
	W1119 23:02:13.434642 1085695 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175_empty.pem, impossibly tiny 0 bytes
	I1119 23:02:13.434652 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 23:02:13.434677 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:02:13.434702 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:02:13.434727 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/certs/key.pem (1679 bytes)
	I1119 23:02:13.434778 1085695 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem (1708 bytes)
	I1119 23:02:13.435373 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:02:13.462248 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:02:13.482580 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:02:13.505226 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 23:02:13.527191 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 23:02:13.557603 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 23:02:13.583070 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:02:13.604899 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/newest-cni-467060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:02:13.633874 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/ssl/certs/8621752.pem --> /usr/share/ca-certificates/8621752.pem (1708 bytes)
	I1119 23:02:13.655474 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:02:13.676933 1085695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-860325/.minikube/certs/862175.pem --> /usr/share/ca-certificates/862175.pem (1338 bytes)
	I1119 23:02:13.698128 1085695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:02:13.712113 1085695 ssh_runner.go:195] Run: openssl version
	I1119 23:02:13.718979 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:02:13.728036 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.732448 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:49 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.732559 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:02:13.776860 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:02:13.785182 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862175.pem && ln -fs /usr/share/ca-certificates/862175.pem /etc/ssl/certs/862175.pem"
	I1119 23:02:13.793734 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.797824 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:56 /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.797937 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862175.pem
	I1119 23:02:13.839622 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/862175.pem /etc/ssl/certs/51391683.0"
	I1119 23:02:13.847894 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8621752.pem && ln -fs /usr/share/ca-certificates/8621752.pem /etc/ssl/certs/8621752.pem"
	I1119 23:02:13.856572 1085695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.860641 1085695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:56 /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.860705 1085695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8621752.pem
	I1119 23:02:13.907116 1085695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8621752.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:02:13.917177 1085695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:02:13.921256 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:02:13.962747 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:02:14.004124 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:02:14.049214 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:02:14.112868 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:02:14.203663 1085695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 23:02:14.257570 1085695 kubeadm.go:401] StartCluster: {Name:newest-cni-467060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-467060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:02:14.257720 1085695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:02:14.257839 1085695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:02:14.368093 1085695 cri.go:89] found id: "7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2"
	I1119 23:02:14.368158 1085695 cri.go:89] found id: "ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac"
	I1119 23:02:14.368179 1085695 cri.go:89] found id: "991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484"
	I1119 23:02:14.368203 1085695 cri.go:89] found id: "55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee"
	I1119 23:02:14.368229 1085695 cri.go:89] found id: ""
	I1119 23:02:14.368315 1085695 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 23:02:14.394336 1085695 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T23:02:14Z" level=error msg="open /run/runc: no such file or directory"
	I1119 23:02:14.394495 1085695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:02:14.409801 1085695 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:02:14.409865 1085695 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:02:14.409949 1085695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:02:14.429012 1085695 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:02:14.429637 1085695 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-467060" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:14.429941 1085695 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-860325/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-467060" cluster setting kubeconfig missing "newest-cni-467060" context setting]
	I1119 23:02:14.430470 1085695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.432129 1085695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:02:14.448950 1085695 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 23:02:14.449026 1085695 kubeadm.go:602] duration metric: took 39.140085ms to restartPrimaryControlPlane
	I1119 23:02:14.449052 1085695 kubeadm.go:403] duration metric: took 191.491291ms to StartCluster
	I1119 23:02:14.449106 1085695 settings.go:142] acquiring lock: {Name:mkc001ebb6127d4ee06d762996d41414c88a8759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.449193 1085695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 23:02:14.450071 1085695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-860325/kubeconfig: {Name:mk773a2df6eac9df8832d5503c220f58c0074cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:02:14.450347 1085695 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:02:14.450714 1085695 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:02:14.450810 1085695 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-467060"
	I1119 23:02:14.450827 1085695 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-467060"
	W1119 23:02:14.450839 1085695 addons.go:248] addon storage-provisioner should already be in state true
	I1119 23:02:14.450850 1085695 addons.go:70] Setting dashboard=true in profile "newest-cni-467060"
	I1119 23:02:14.450882 1085695 addons.go:70] Setting default-storageclass=true in profile "newest-cni-467060"
	I1119 23:02:14.450906 1085695 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-467060"
	I1119 23:02:14.450951 1085695 addons.go:239] Setting addon dashboard=true in "newest-cni-467060"
	W1119 23:02:14.451070 1085695 addons.go:248] addon dashboard should already be in state true
	I1119 23:02:14.451127 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.451211 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.451752 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.450878 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.455457 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.450785 1085695 config.go:182] Loaded profile config "newest-cni-467060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:02:14.462185 1085695 out.go:179] * Verifying Kubernetes components...
	I1119 23:02:14.470994 1085695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:02:14.507056 1085695 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 23:02:14.507509 1085695 addons.go:239] Setting addon default-storageclass=true in "newest-cni-467060"
	W1119 23:02:14.507532 1085695 addons.go:248] addon default-storageclass should already be in state true
	I1119 23:02:14.507560 1085695 host.go:66] Checking if "newest-cni-467060" exists ...
	I1119 23:02:14.508011 1085695 cli_runner.go:164] Run: docker container inspect newest-cni-467060 --format={{.State.Status}}
	I1119 23:02:14.518696 1085695 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 23:02:14.526986 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 23:02:14.527021 1085695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 23:02:14.527095 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.544744 1085695 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:02:14.547990 1085695 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:14.548015 1085695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:02:14.548082 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.573961 1085695 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:14.573990 1085695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:02:14.574053 1085695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-467060
	I1119 23:02:14.590951 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:14.596431 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
	I1119 23:02:14.624660 1085695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/newest-cni-467060/id_rsa Username:docker}
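
The cli_runner/sshutil pairs above show how each addon manifest reaches the node: the host port that Docker mapped to the container's 22/tcp is read back with a Go template via `docker container inspect`, and an SSH client is then opened against 127.0.0.1 on that port. A minimal sketch of the port lookup, assuming only the docker CLI invocation shown in the log (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort is a hedged sketch of the lookup done by the cli_runner lines
// above: it asks the docker CLI which host port is bound to 22/tcp on the
// named container, using the same Go template that appears in the log, and
// trims the trailing newline from the CLI output.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// In the run above this resolves to 33891 for newest-cni-467060.
	port, err := sshHostPort("newest-cni-467060")
	fmt.Println(port, err)
}
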
	I1119 23:02:14.855336 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:02:14.872467 1085695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:02:14.956267 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 23:02:14.956293 1085695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 23:02:14.987598 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:02:15.084860 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 23:02:15.084887 1085695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 23:02:15.162081 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 23:02:15.162104 1085695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 23:02:15.186625 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 23:02:15.186648 1085695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 23:02:15.204893 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 23:02:15.204919 1085695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 23:02:15.222746 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 23:02:15.222770 1085695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 23:02:15.250432 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 23:02:15.250455 1085695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 23:02:15.269921 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 23:02:15.269945 1085695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 23:02:15.288629 1085695 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:02:15.288654 1085695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1119 23:02:15.417117 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	W1119 23:02:17.917256 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	I1119 23:02:15.310711 1085695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 23:02:20.632619 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.777246074s)
	I1119 23:02:20.632662 1085695 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.760173058s)
	I1119 23:02:20.632693 1085695 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:02:20.632755 1085695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:02:20.632822 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.645199962s)
	I1119 23:02:20.633167 1085695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.322423457s)
	I1119 23:02:20.636259 1085695 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-467060 addons enable metrics-server
	
	I1119 23:02:20.665253 1085695 api_server.go:72] duration metric: took 6.21483797s to wait for apiserver process to appear ...
	I1119 23:02:20.665280 1085695 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:02:20.665298 1085695 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:02:20.674132 1085695 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:02:20.674162 1085695 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:02:20.678507 1085695 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 23:02:20.681404 1085695 addons.go:515] duration metric: took 6.230678418s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 23:02:21.165695 1085695 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 23:02:21.174739 1085695 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 23:02:21.175797 1085695 api_server.go:141] control plane version: v1.34.1
	I1119 23:02:21.175828 1085695 api_server.go:131] duration metric: took 510.540035ms to wait for apiserver health ...
	I1119 23:02:21.175838 1085695 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:02:21.179324 1085695 system_pods.go:59] 8 kube-system pods found
	I1119 23:02:21.179371 1085695 system_pods.go:61] "coredns-66bc5c9577-8xn65" [ef6f99cd-44ff-4adf-bb68-7328e6d5178e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:02:21.179387 1085695 system_pods.go:61] "etcd-newest-cni-467060" [3ab9e3f1-b893-4477-b3e5-5d3f99a18ea0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:02:21.179396 1085695 system_pods.go:61] "kindnet-4sgcn" [eeb9b480-cec1-4be0-a705-e73199a83c5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 23:02:21.179411 1085695 system_pods.go:61] "kube-apiserver-newest-cni-467060" [8d6e3d44-2b2e-49ce-b2e7-0e21ed742414] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:02:21.179428 1085695 system_pods.go:61] "kube-controller-manager-newest-cni-467060" [5b42fbd4-53ad-4282-8760-3d28be1b3a9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:02:21.179436 1085695 system_pods.go:61] "kube-proxy-ldb2r" [cdecedd2-bfb5-4826-be33-924e26a05b88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 23:02:21.179446 1085695 system_pods.go:61] "kube-scheduler-newest-cni-467060" [91153281-645c-4b6e-9408-1d3946edc224] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:02:21.179454 1085695 system_pods.go:61] "storage-provisioner" [0fb6f975-1da8-41b4-91f4-240a5daf116b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 23:02:21.179461 1085695 system_pods.go:74] duration metric: took 3.616244ms to wait for pod list to return data ...
	I1119 23:02:21.179474 1085695 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:02:21.182114 1085695 default_sa.go:45] found service account: "default"
	I1119 23:02:21.182143 1085695 default_sa.go:55] duration metric: took 2.662152ms for default service account to be created ...
	I1119 23:02:21.182157 1085695 kubeadm.go:587] duration metric: took 6.731755873s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 23:02:21.182173 1085695 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:02:21.185048 1085695 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 23:02:21.185092 1085695 node_conditions.go:123] node cpu capacity is 2
	I1119 23:02:21.185106 1085695 node_conditions.go:105] duration metric: took 2.928033ms to run NodePressure ...
	I1119 23:02:21.185119 1085695 start.go:242] waiting for startup goroutines ...
	I1119 23:02:21.185127 1085695 start.go:247] waiting for cluster config update ...
	I1119 23:02:21.185141 1085695 start.go:256] writing updated cluster config ...
	I1119 23:02:21.185533 1085695 ssh_runner.go:195] Run: rm -f paused
	I1119 23:02:21.269578 1085695 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 23:02:21.273332 1085695 out.go:179] * Done! kubectl is now configured to use "newest-cni-467060" cluster and "default" namespace by default
	W1119 23:02:20.416734 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
	W1119 23:02:22.917590 1081671 node_ready.go:57] node "auto-334366" has "Ready":"False" status (will retry)
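
The start flow above ends by polling https://192.168.76.2:8443/healthz: the first probe returns 500 while the rbac/bootstrap-roles post-start hook is still pending, and the retry roughly half a second later returns 200, after which the remaining waiters complete. A minimal sketch of such a poll loop (not minikube's code; TLS verification is skipped here purely to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz is a hedged sketch of the apiserver health wait above: it
// polls the /healthz URL until it returns HTTP 200 or the timeout expires.
// The real flow authenticates against the cluster's certificates instead of
// skipping verification.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok", as in the 200 above
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("healthz not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond) // the log shows a ~500ms retry interval
	}
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute))
}
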
	
	
	==> CRI-O <==
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.859176725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.867207932Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d459774a-aff5-4e1d-82ab-d4b8344385e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.867961227Z" level=info msg="Running pod sandbox: kube-system/kindnet-4sgcn/POD" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.868018795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.878525248Z" level=info msg="Ran pod sandbox b83a25c67a98daec850fa23fe8ba8954ba035448063725cabf422456f24eb215 with infra container: kube-system/kube-proxy-ldb2r/POD" id=d459774a-aff5-4e1d-82ab-d4b8344385e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.881391595Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.882230051Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2da69901-dfdc-4e9a-825f-8237f6685897 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.883383653Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=794f91cb-c310-4af9-be3b-4362a8d85f58 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.887710894Z" level=info msg="Creating container: kube-system/kube-proxy-ldb2r/kube-proxy" id=f63b506e-a5eb-460e-818a-e88f75f869fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.887868073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.897641827Z" level=info msg="Ran pod sandbox cc7467a903f188f27a1d37854a6c5f25afc91496335bf1f8a9332b2c16138252 with infra container: kube-system/kindnet-4sgcn/POD" id=72b82bdf-f91a-4fef-80bd-af4804a8b439 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.906102363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.90662103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.907779981Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8213d64c-1440-4f5b-8895-20a896957d00 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.909012255Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8894f35e-16f3-46c3-8187-87d13ab80b7b name=/runtime.v1.ImageService/ImageStatus
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.910207202Z" level=info msg="Creating container: kube-system/kindnet-4sgcn/kindnet-cni" id=5e5919b5-dde9-44a7-b716-35b7a4affbd2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.910312377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.91781663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.918523023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.94981176Z" level=info msg="Created container 02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3: kube-system/kube-proxy-ldb2r/kube-proxy" id=f63b506e-a5eb-460e-818a-e88f75f869fe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.950930071Z" level=info msg="Starting container: 02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3" id=c6f1a871-4aa8-414a-b06a-205f0f3e563c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.952669204Z" level=info msg="Created container 999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18: kube-system/kindnet-4sgcn/kindnet-cni" id=5e5919b5-dde9-44a7-b716-35b7a4affbd2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.953915065Z" level=info msg="Starting container: 999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18" id=17f1f556-6d7b-45a5-a807-3c1b64f3954a name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.955803803Z" level=info msg="Started container" PID=1070 containerID=999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18 description=kube-system/kindnet-4sgcn/kindnet-cni id=17f1f556-6d7b-45a5-a807-3c1b64f3954a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc7467a903f188f27a1d37854a6c5f25afc91496335bf1f8a9332b2c16138252
	Nov 19 23:02:20 newest-cni-467060 crio[618]: time="2025-11-19T23:02:20.956322725Z" level=info msg="Started container" PID=1069 containerID=02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3 description=kube-system/kube-proxy-ldb2r/kube-proxy id=c6f1a871-4aa8-414a-b06a-205f0f3e563c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83a25c67a98daec850fa23fe8ba8954ba035448063725cabf422456f24eb215
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	999a7516f7fc3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   cc7467a903f18       kindnet-4sgcn                               kube-system
	02d180d011beb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   b83a25c67a98d       kube-proxy-ldb2r                            kube-system
	7aeae19d97451       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   eb7f311076cee       kube-controller-manager-newest-cni-467060   kube-system
	ab32007c8b4af       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   252366b5c63cf       kube-scheduler-newest-cni-467060            kube-system
	991708e4be346       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   e32aed6b3904a       kube-apiserver-newest-cni-467060            kube-system
	55e2d76de39f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   7867a0d72f14c       etcd-newest-cni-467060                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-467060
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-467060
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-467060
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T23_01_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:01:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-467060
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:02:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 23:02:19 +0000   Wed, 19 Nov 2025 23:01:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-467060
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de5c7cc592a67801eaa2fbe691dd049
	  System UUID:                ff8061df-c1de-4a74-aef4-0d63b13a9d04
	  Boot ID:                    4a39780a-394d-4c55-890e-cf7f2ba9c261
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-467060                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-4sgcn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-467060             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-467060    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-ldb2r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-467060             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-467060 event: Registered Node newest-cni-467060 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-467060 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-467060 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-467060 event: Registered Node newest-cni-467060 in Controller
	
	
	==> dmesg <==
	[Nov19 22:38] overlayfs: idmapped layers are currently not supported
	[ +16.198332] overlayfs: idmapped layers are currently not supported
	[Nov19 22:39] overlayfs: idmapped layers are currently not supported
	[Nov19 22:40] overlayfs: idmapped layers are currently not supported
	[Nov19 22:41] overlayfs: idmapped layers are currently not supported
	[Nov19 22:42] overlayfs: idmapped layers are currently not supported
	[Nov19 22:44] overlayfs: idmapped layers are currently not supported
	[Nov19 22:46] overlayfs: idmapped layers are currently not supported
	[ +32.512602] overlayfs: idmapped layers are currently not supported
	[Nov19 22:48] overlayfs: idmapped layers are currently not supported
	[Nov19 22:50] overlayfs: idmapped layers are currently not supported
	[Nov19 22:51] overlayfs: idmapped layers are currently not supported
	[ +38.342820] overlayfs: idmapped layers are currently not supported
	[Nov19 22:54] overlayfs: idmapped layers are currently not supported
	[Nov19 22:55] overlayfs: idmapped layers are currently not supported
	[  +4.178785] overlayfs: idmapped layers are currently not supported
	[Nov19 22:56] overlayfs: idmapped layers are currently not supported
	[Nov19 22:57] overlayfs: idmapped layers are currently not supported
	[Nov19 22:58] overlayfs: idmapped layers are currently not supported
	[ +17.118892] overlayfs: idmapped layers are currently not supported
	[Nov19 23:00] overlayfs: idmapped layers are currently not supported
	[ +12.401560] overlayfs: idmapped layers are currently not supported
	[Nov19 23:01] overlayfs: idmapped layers are currently not supported
	[ +13.188823] overlayfs: idmapped layers are currently not supported
	[Nov19 23:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [55e2d76de39f6e5865a599e8ecf478e4e67e1dd969e8aad0e1d82e17fb6e08ee] <==
	{"level":"warn","ts":"2025-11-19T23:02:18.009276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.046707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.078007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.146212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.159918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.189176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.223877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.248394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.270565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.297704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.336659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.358229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.377760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.392772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.409823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.429574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.452971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.486201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.528091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.542118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.585663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.591082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.606498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.627215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:02:18.686307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47316","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:27 up  4:44,  0 user,  load average: 5.15, 3.71, 2.85
	Linux newest-cni-467060 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [999a7516f7fc36a89a533be4fdf765e99e2f16ffa4cee145e933e8bf7fce2c18] <==
	I1119 23:02:21.120838       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 23:02:21.121811       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 23:02:21.122060       1 main.go:148] setting mtu 1500 for CNI 
	I1119 23:02:21.122127       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 23:02:21.122192       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T23:02:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 23:02:21.320601       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 23:02:21.320620       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 23:02:21.320628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 23:02:21.320938       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [991708e4be3467882305f88186093de4312c73bd49f4d0b79a263a75f28b5484] <==
	I1119 23:02:19.570624       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 23:02:19.571357       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:02:19.571392       1 policy_source.go:240] refreshing policies
	I1119 23:02:19.585371       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:02:19.629497       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:02:19.648675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:02:19.652600       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:02:19.652874       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:02:19.661527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:02:19.663410       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 23:02:19.663521       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:02:19.702514       1 cache.go:39] Caches are synced for autoregister controller
	E1119 23:02:19.716915       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 23:02:20.277216       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 23:02:20.316535       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:02:20.345326       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:02:20.355941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:02:20.364323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:02:20.375331       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 23:02:20.461207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.120.173"}
	I1119 23:02:20.493394       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.2.231"}
	I1119 23:02:23.111563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:02:23.357933       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:02:23.508254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:02:23.561687       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7aeae19d9745132d85cc5b537dab425758029fa33dd54f6c38b607d98deec3a2] <==
	I1119 23:02:22.989114       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 23:02:22.992415       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 23:02:22.993624       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 23:02:23.001311       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:02:23.001529       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:02:23.001698       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:02:23.001909       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-467060"
	I1119 23:02:23.002138       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 23:02:23.012813       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:02:23.012912       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 23:02:23.012924       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 23:02:23.012935       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 23:02:23.012944       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 23:02:23.012954       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 23:02:23.012962       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 23:02:23.012978       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:02:23.016009       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:02:23.016174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:02:23.017225       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 23:02:23.023568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:02:23.034419       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 23:02:23.035010       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:02:23.037229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:02:23.037244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:02:23.057726       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [02d180d011beb5d893dc37f8e31f1d9b953ad62c994252583717bfa97056fca3] <==
	I1119 23:02:21.014173       1 server_linux.go:53] "Using iptables proxy"
	I1119 23:02:21.100403       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:02:21.201124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:02:21.201226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 23:02:21.201364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:02:21.237866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 23:02:21.237983       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:02:21.241842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:02:21.242215       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:02:21.242410       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:02:21.244344       1 config.go:200] "Starting service config controller"
	I1119 23:02:21.244410       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:02:21.244468       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:02:21.244497       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:02:21.244532       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:02:21.244558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:02:21.245267       1 config.go:309] "Starting node config controller"
	I1119 23:02:21.245336       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:02:21.245370       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:02:21.345322       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:02:21.345368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:02:21.345399       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ab32007c8b4af3d4c93ad3fb609cc56c5abd60cfeae54766dca8e9a38558d9ac] <==
	I1119 23:02:17.264071       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:02:19.453598       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 23:02:19.453637       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 23:02:19.453651       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:02:19.453659       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:02:19.573250       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:02:19.578519       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:02:19.584691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:02:19.589996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:02:19.590213       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:02:19.590650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:02:19.691112       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.644418     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-467060\" already exists" pod="kube-system/kube-apiserver-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.644464     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.651345     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-467060\" already exists" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.669457     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-467060\" already exists" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.669650     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691657     737 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691764     737 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.691796     737 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.695105     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.719428     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-467060\" already exists" pod="kube-system/kube-scheduler-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.719463     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.745602     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-467060\" already exists" pod="kube-system/etcd-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: I1119 23:02:19.933667     737 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:19 newest-cni-467060 kubelet[737]: E1119 23:02:19.944809     737 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-467060\" already exists" pod="kube-system/kube-controller-manager-newest-cni-467060"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.548268     737 apiserver.go:52] "Watching apiserver"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.584920     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585161     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-lib-modules\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585197     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-lib-modules\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-cni-cfg\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585263     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdecedd2-bfb5-4826-be33-924e26a05b88-xtables-lock\") pod \"kube-proxy-ldb2r\" (UID: \"cdecedd2-bfb5-4826-be33-924e26a05b88\") " pod="kube-system/kube-proxy-ldb2r"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.585288     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb9b480-cec1-4be0-a705-e73199a83c5d-xtables-lock\") pod \"kindnet-4sgcn\" (UID: \"eeb9b480-cec1-4be0-a705-e73199a83c5d\") " pod="kube-system/kindnet-4sgcn"
	Nov 19 23:02:20 newest-cni-467060 kubelet[737]: I1119 23:02:20.614455     737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 23:02:22 newest-cni-467060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
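Note on the kube-scheduler warnings above: the scheduler reports that it cannot read the extension-apiserver-authentication configmap and prints a suggested RBAC fix. A minimal sketch of that fix, with a placeholder rolebinding name and service account (both are assumptions for illustration, not values taken from this cluster):

  kubectl create rolebinding extension-apiserver-auth-reader \
    --namespace=kube-system \
    --role=extension-apiserver-authentication-reader \
    --serviceaccount=kube-system:default

The warning is non-fatal in this run; as the following log lines show, the scheduler continues without the authentication configuration and finishes syncing its caches.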
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-467060 -n newest-cni-467060
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-467060 -n newest-cni-467060: exit status 2 (384.083621ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-467060 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl: exit status 1 (104.868768ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-8xn65" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6j6k8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-x5bpl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-467060 describe pod coredns-66bc5c9577-8xn65 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.11s)
E1119 23:08:24.296722  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
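The post-mortem describe step above returns NotFound because kubectl describe without a namespace flag looks only in the default namespace, while the listed pods live in system namespaces. A hedged re-run with explicit namespaces (assuming the usual minikube layout of kube-system and kubernetes-dashboard; the pod names are taken from the non-running-pods listing above):

  kubectl --context newest-cni-467060 -n kube-system describe pod coredns-66bc5c9577-8xn65 storage-provisioner
  kubectl --context newest-cni-467060 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-6j6k8 kubernetes-dashboard-855c9754f9-x5bpl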

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 39.53
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 38.84
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 176.24
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.77
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 36.94
50 TestCertExpiration 238.75
52 TestForceSystemdFlag 38.49
53 TestForceSystemdEnv 40.94
58 TestErrorSpam/setup 36.07
59 TestErrorSpam/start 0.91
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 5.37
62 TestErrorSpam/unpause 5.07
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.7
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.5
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
75 TestFunctional/serial/CacheCmd/cache/add_local 1.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 315.86
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.53
86 TestFunctional/serial/LogsFileCmd 1.58
87 TestFunctional/serial/InvalidService 4.63
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 11.39
91 TestFunctional/parallel/DryRun 0.65
92 TestFunctional/parallel/InternationalLanguage 0.28
93 TestFunctional/parallel/StatusCmd 1.4
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 23.51
101 TestFunctional/parallel/SSHCmd 0.83
102 TestFunctional/parallel/CpCmd 2.1
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.42
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.38
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 7.77
130 TestFunctional/parallel/MountCmd/specific-port 2.1
131 TestFunctional/parallel/ServiceCmd/List 0.68
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.39
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
144 TestFunctional/parallel/ImageCommands/Setup 0.65
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.29
163 TestMultiControlPlane/serial/DeployApp 6.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.52
165 TestMultiControlPlane/serial/AddWorkerNode 60.21
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 19.79
169 TestMultiControlPlane/serial/StopSecondaryNode 12.9
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.67
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.84
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
176 TestMultiControlPlane/serial/StopCluster 36.13
177 TestMultiControlPlane/serial/RestartCluster 58.75
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 84.72
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 79.48
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 66.6
211 TestKicCustomNetwork/use_default_bridge_network 38.92
212 TestKicExistingNetwork 38.65
213 TestKicCustomSubnet 38.98
214 TestKicStaticIP 39.51
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.69
219 TestMountStart/serial/StartWithMountFirst 9
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.56
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.2
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 139.55
231 TestMultiNode/serial/DeployApp2Nodes 5.14
232 TestMultiNode/serial/PingHostFrom2Pods 0.89
233 TestMultiNode/serial/AddNode 59.73
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.48
237 TestMultiNode/serial/StopNode 2.42
238 TestMultiNode/serial/StartAfterStop 8.14
239 TestMultiNode/serial/RestartKeepsNodes 73.61
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.04
242 TestMultiNode/serial/RestartMultiNode 52.02
243 TestMultiNode/serial/ValidateNameConflict 35.71
248 TestPreload 126.26
250 TestScheduledStopUnix 106.58
253 TestInsufficientStorage 14.93
254 TestRunningBinaryUpgrade 52.72
256 TestKubernetesUpgrade 518.27
257 TestMissingContainerUpgrade 116.5
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 43.39
261 TestNoKubernetes/serial/StartWithStopK8s 49.76
262 TestNoKubernetes/serial/Start 5.94
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
265 TestNoKubernetes/serial/ProfileList 1.29
266 TestNoKubernetes/serial/Stop 1.39
267 TestNoKubernetes/serial/StartNoArgs 8.45
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
269 TestStoppedBinaryUpgrade/Setup 3.51
270 TestStoppedBinaryUpgrade/Upgrade 66.01
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
280 TestPause/serial/Start 81.76
281 TestPause/serial/SecondStartNoReconfiguration 27.87
290 TestNetworkPlugins/group/false 3.91
295 TestStartStop/group/old-k8s-version/serial/FirstStart 74.36
297 TestStartStop/group/no-preload/serial/FirstStart 69.66
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.53
300 TestStartStop/group/old-k8s-version/serial/Stop 12.08
301 TestStartStop/group/no-preload/serial/DeployApp 9.31
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
303 TestStartStop/group/old-k8s-version/serial/SecondStart 54.55
305 TestStartStop/group/no-preload/serial/Stop 12.26
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 52.53
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/embed-certs/serial/FirstStart 90.05
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.16
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.17
320 TestStartStop/group/embed-certs/serial/DeployApp 10.33
322 TestStartStop/group/embed-certs/serial/Stop 12.16
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.29
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
327 TestStartStop/group/embed-certs/serial/SecondStart 52.47
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.25
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
337 TestStartStop/group/newest-cni/serial/FirstStart 48.18
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
340 TestNetworkPlugins/group/auto/Start 91.22
341 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/Stop 1.6
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
345 TestStartStop/group/newest-cni/serial/SecondStart 16.55
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
350 TestNetworkPlugins/group/calico/Start 85.34
351 TestNetworkPlugins/group/auto/KubeletFlags 0.4
352 TestNetworkPlugins/group/auto/NetCatPod 14.34
353 TestNetworkPlugins/group/auto/DNS 0.16
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.17
356 TestNetworkPlugins/group/custom-flannel/Start 66.88
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.42
359 TestNetworkPlugins/group/calico/NetCatPod 12.41
360 TestNetworkPlugins/group/calico/DNS 0.24
361 TestNetworkPlugins/group/calico/Localhost 0.19
362 TestNetworkPlugins/group/calico/HairPin 0.15
363 TestNetworkPlugins/group/kindnet/Start 87.16
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
366 TestNetworkPlugins/group/custom-flannel/DNS 0.21
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
369 TestNetworkPlugins/group/flannel/Start 57.17
370 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
371 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
372 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/kindnet/DNS 0.16
375 TestNetworkPlugins/group/kindnet/Localhost 0.13
376 TestNetworkPlugins/group/kindnet/HairPin 0.15
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
378 TestNetworkPlugins/group/flannel/NetCatPod 9.27
379 TestNetworkPlugins/group/flannel/DNS 0.16
380 TestNetworkPlugins/group/flannel/Localhost 0.15
381 TestNetworkPlugins/group/flannel/HairPin 0.16
382 TestNetworkPlugins/group/enable-default-cni/Start 62.87
383 TestNetworkPlugins/group/bridge/Start 85.09
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (39.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-914845 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-914845 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.526009117s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (39.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 21:48:16.658365  862175 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 21:48:16.658442  862175 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-914845
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-914845: exit status 85 (83.319272ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-914845 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-914845 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:37.176609  862180 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:37.176829  862180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:37.176857  862180 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:37.176876  862180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:37.177500  862180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	W1119 21:47:37.177703  862180 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21918-860325/.minikube/config/config.json: open /home/jenkins/minikube-integration/21918-860325/.minikube/config/config.json: no such file or directory
	I1119 21:47:37.178177  862180 out.go:368] Setting JSON to true
	I1119 21:47:37.179076  862180 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12586,"bootTime":1763576271,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 21:47:37.179170  862180 start.go:143] virtualization:  
	I1119 21:47:37.183265  862180 out.go:99] [download-only-914845] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1119 21:47:37.183437  862180 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 21:47:37.183487  862180 notify.go:221] Checking for updates...
	I1119 21:47:37.186372  862180 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:37.189300  862180 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:37.192208  862180 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:47:37.195303  862180 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 21:47:37.198118  862180 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 21:47:37.203814  862180 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:47:37.204091  862180 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:37.235004  862180 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:47:37.235150  862180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:37.295000  862180 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 21:47:37.285265849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:47:37.295109  862180 docker.go:319] overlay module found
	I1119 21:47:37.298082  862180 out.go:99] Using the docker driver based on user configuration
	I1119 21:47:37.298120  862180 start.go:309] selected driver: docker
	I1119 21:47:37.298127  862180 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:37.298235  862180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:37.356216  862180 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 21:47:37.346900565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:47:37.356370  862180 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:37.356668  862180 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 21:47:37.356819  862180 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:47:37.359862  862180 out.go:171] Using Docker driver with root privileges
	I1119 21:47:37.362811  862180 cni.go:84] Creating CNI manager for ""
	I1119 21:47:37.362901  862180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:37.362911  862180 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:37.362986  862180 start.go:353] cluster config:
	{Name:download-only-914845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-914845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:37.365984  862180 out.go:99] Starting "download-only-914845" primary control-plane node in "download-only-914845" cluster
	I1119 21:47:37.366008  862180 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:47:37.368820  862180 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:37.368884  862180 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:47:37.369004  862180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:37.385658  862180 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:37.385861  862180 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:37.385973  862180 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:37.429039  862180 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 21:47:37.429066  862180 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:37.429235  862180 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:47:37.432604  862180 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 21:47:37.432634  862180 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1119 21:47:37.520160  862180 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1119 21:47:37.520291  862180 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 21:47:42.733472  862180 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	
	
	* The control-plane node download-only-914845 host does not exist
	  To start a cluster, run: "minikube start -p download-only-914845"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
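The download step in the log above fetches the v1.28.0 preload tarball with an md5 checksum obtained from the GCS API. A minimal sketch for re-verifying the cached tarball by hand, using the checksum and cache path reported in the log (assuming the file is still present in the cache):

  cd /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball
  echo "e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -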

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-914845
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (38.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-667855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-667855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.835236147s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (38.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 21:48:55.926270  862175 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 21:48:55.926304  862175 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-667855
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-667855: exit status 85 (87.765827ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-914845 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-914845 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ delete  │ -p download-only-914845                                                                                                                                                   │ download-only-914845 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │ 19 Nov 25 21:48 UTC │
	│ start   │ -o=json --download-only -p download-only-667855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-667855 │ jenkins │ v1.37.0 │ 19 Nov 25 21:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:48:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:48:17.137074  862380 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:48:17.137252  862380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:17.137266  862380 out.go:374] Setting ErrFile to fd 2...
	I1119 21:48:17.137272  862380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:48:17.137540  862380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 21:48:17.137980  862380 out.go:368] Setting JSON to true
	I1119 21:48:17.138819  862380 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12626,"bootTime":1763576271,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 21:48:17.138913  862380 start.go:143] virtualization:  
	I1119 21:48:17.142346  862380 out.go:99] [download-only-667855] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 21:48:17.142647  862380 notify.go:221] Checking for updates...
	I1119 21:48:17.146500  862380 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:48:17.149521  862380 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:48:17.152449  862380 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 21:48:17.155325  862380 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 21:48:17.158130  862380 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 21:48:17.163848  862380 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:48:17.164117  862380 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:48:17.186340  862380 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 21:48:17.186448  862380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:17.267165  862380 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:17.252139241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:17.267279  862380 docker.go:319] overlay module found
	I1119 21:48:17.270307  862380 out.go:99] Using the docker driver based on user configuration
	I1119 21:48:17.270366  862380 start.go:309] selected driver: docker
	I1119 21:48:17.270393  862380 start.go:930] validating driver "docker" against <nil>
	I1119 21:48:17.270531  862380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:48:17.324050  862380 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 21:48:17.315288667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 21:48:17.324216  862380 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:48:17.324502  862380 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 21:48:17.324657  862380 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:48:17.327722  862380 out.go:171] Using Docker driver with root privileges
	I1119 21:48:17.330492  862380 cni.go:84] Creating CNI manager for ""
	I1119 21:48:17.330563  862380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:48:17.330577  862380 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:48:17.330662  862380 start.go:353] cluster config:
	{Name:download-only-667855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-667855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:48:17.333555  862380 out.go:99] Starting "download-only-667855" primary control-plane node in "download-only-667855" cluster
	I1119 21:48:17.333574  862380 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:48:17.336345  862380 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:48:17.336392  862380 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:48:17.336474  862380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:48:17.352568  862380 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:48:17.352704  862380 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:48:17.352725  862380 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:48:17.352730  862380 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:48:17.352738  862380 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:48:17.402496  862380 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 21:48:17.402523  862380 cache.go:65] Caching tarball of preloaded images
	I1119 21:48:17.402688  862380 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:48:17.405764  862380 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1119 21:48:17.405786  862380 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1119 21:48:17.494825  862380 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1119 21:48:17.494894  862380 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21918-860325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-667855 host does not exist
	  To start a cluster, run: "minikube start -p download-only-667855"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-667855
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 21:48:57.107380  862175 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-835231 --alsologtostderr --binary-mirror http://127.0.0.1:43589 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-835231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-835231
--- PASS: TestBinaryMirror (0.63s)
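The mirror test above pairs the kubectl binary with its published .sha256 file from dl.k8s.io. A minimal sketch of verifying such a download by hand, using the URLs from the log (illustrative, not part of the test):

	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	# compare the downloaded binary against the published checksum
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check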

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-441523
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-441523: exit status 85 (80.009243ms)

                                                
                                                
-- stdout --
	* Profile "addons-441523" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-441523"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-441523
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-441523: exit status 85 (73.337476ms)

                                                
                                                
-- stdout --
	* Profile "addons-441523" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-441523"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (176.24s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-441523 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-441523 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m56.235028532s)
--- PASS: TestAddons/Setup (176.24s)
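After a start like the one above, the resulting addon state can be inspected with the addons subcommand (a sketch, not part of the test):

	out/minikube-linux-arm64 -p addons-441523 addons list
	# prints each addon with its enabled/disabled status for the addons-441523 profile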

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-441523 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-441523 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-441523 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-441523 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5d8a1943-d326-43ca-8939-315b8ae9c3a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5d8a1943-d326-43ca-8939-315b8ae9c3a7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003131886s
addons_test.go:694: (dbg) Run:  kubectl --context addons-441523 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-441523 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-441523 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-441523 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-441523
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-441523: (12.159273226s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-441523
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-441523
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-441523
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
x
+
TestCertOptions (36.94s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-110863 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.993712018s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-110863 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-110863 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-110863 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-110863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-110863
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-110863: (2.236854556s)
--- PASS: TestCertOptions (36.94s)

                                                
                                    
x
+
TestCertExpiration (238.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1119 22:51:44.295067  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:51:54.908718  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-943214 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.815987729s)
E1119 22:53:41.230195  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-943214 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.059509816s)
helpers_test.go:175: Cleaning up "cert-expiration-943214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-943214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-943214: (2.869016835s)
--- PASS: TestCertExpiration (238.75s)
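The certificate-expiration flow above first issues short-lived (3m) certificates and then restarts with an 8760h expiration. A minimal sketch of checking the apiserver certificate dates on a running profile, reusing the openssl invocation style from TestCertOptions above (the cert-expiration-943214 profile itself is deleted at the end of the test):

	out/minikube-linux-arm64 -p cert-expiration-943214 ssh "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
	# prints notBefore/notAfter; with --cert-expiration=3m the notAfter date falls ~3 minutes after issuance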

                                                
                                    
x
+
TestForceSystemdFlag (38.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-265514 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-265514 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.551525854s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-265514 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-265514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-265514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-265514: (2.530111262s)
--- PASS: TestForceSystemdFlag (38.49s)
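TestForceSystemdFlag reads /etc/crio/crio.conf.d/02-crio.conf to confirm the systemd cgroup manager was configured. A hand-run equivalent (a sketch; assumes the standard cgroup_manager key in that drop-in):

	out/minikube-linux-arm64 -p force-systemd-flag-265514 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected with --force-systemd: cgroup_manager = "systemd"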

                                                
                                    
x
+
TestForceSystemdEnv (40.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-860026 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.361869206s)
helpers_test.go:175: Cleaning up "force-systemd-env-860026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-860026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-860026: (2.580924141s)
--- PASS: TestForceSystemdEnv (40.94s)

                                                
                                    
x
+
TestErrorSpam/setup (36.07s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-909089 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-909089 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-909089 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-909089 --driver=docker  --container-runtime=crio: (36.066581932s)
--- PASS: TestErrorSpam/setup (36.07s)

                                                
                                    
x
+
TestErrorSpam/start (0.91s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

                                                
                                    
x
+
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
x
+
TestErrorSpam/pause (5.37s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause: exit status 80 (1.753410429s)

                                                
                                                
-- stdout --
	* Pausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause: exit status 80 (1.758443443s)

                                                
                                                
-- stdout --
	* Pausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause: exit status 80 (1.852350974s)

                                                
                                                
-- stdout --
	* Pausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.37s)

                                                
                                    
x
+
TestErrorSpam/unpause (5.07s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause: exit status 80 (1.87456799s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause: exit status 80 (1.76316348s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause: exit status 80 (1.432664067s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-909089 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:56:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.07s)
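The pause and unpause failures above all come from "sudo runc list -f json" failing with "open /run/runc: no such file or directory" inside the node. A minimal diagnostic sketch for reproducing that check by hand (illustrative; not something the test runs):

	out/minikube-linux-arm64 -p nospam-909089 ssh "sudo runc list -f json"
	# the error points at a missing runc state directory; check it directly:
	out/minikube-linux-arm64 -p nospam-909089 ssh "ls -la /run/runc"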

                                                
                                    
x
+
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 stop: (1.308256406s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-909089 --log_dir /tmp/nospam-909089 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21918-860325/.minikube/files/etc/test/nested/copy/862175/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.7s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1119 21:56:54.908718  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:54.916003  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:54.928479  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:54.949824  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:54.991271  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:55.072630  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:55.234057  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:55.555633  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:56.197465  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:56:57.479660  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:00.043100  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:05.173515  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:15.415494  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:57:35.897173  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-642533 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.702419951s)
--- PASS: TestFunctional/serial/StartWithProxy (80.70s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.5s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1119 21:57:39.851180  862175 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-642533 --alsologtostderr -v=8: (29.502813776s)
functional_test.go:678: soft start took 29.503354475s for "functional-642533" cluster.
I1119 21:58:09.354273  862175 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.50s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-642533 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:3.1: (1.162712814s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:3.3: (1.129164914s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 cache add registry.k8s.io/pause:latest: (1.128710203s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-642533 /tmp/TestFunctionalserialCacheCmdcacheadd_local2204090992/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache add minikube-local-cache-test:functional-642533
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache delete minikube-local-cache-test:functional-642533
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-642533
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.810722ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 kubectl -- --context functional-642533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-642533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (315.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1119 21:58:16.859064  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:59:38.784001  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:01:54.908424  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:02:22.632846  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-642533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (5m15.862393475s)
functional_test.go:776: restart took 5m15.862503843s for "functional-642533" cluster.
I1119 22:03:32.559135  862175 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (315.86s)
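The restart above passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. One way to confirm the flag reached the running apiserver (a sketch; assumes the usual kubeadm component=kube-apiserver label on the static pod):

	kubectl --context functional-642533 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins
	# the static-pod command line should include --enable-admission-plugins=NamespaceAutoProvision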

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-642533 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 logs: (1.532873833s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 logs --file /tmp/TestFunctionalserialLogsFileCmd3865920649/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 logs --file /tmp/TestFunctionalserialLogsFileCmd3865920649/001/logs.txt: (1.578005568s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.63s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-642533 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-642533
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-642533: exit status 115 (384.394378ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32529 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-642533 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.63s)
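The SVC_UNREACHABLE exit above means the NodePort exists but has no running backend pod. A quick cluster-side check (a sketch, not part of the test):

	kubectl --context functional-642533 get svc,endpoints invalid-svc
	# an empty ENDPOINTS column confirms no ready pods back the service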

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 config get cpus: exit status 14 (60.406455ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 config get cpus: exit status 14 (74.272213ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-642533 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-642533 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 899830: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.39s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-642533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (300.797395ms)

                                                
                                                
-- stdout --
	* [functional-642533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:14:09.864634  899322 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:14:09.864826  899322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:09.864839  899322 out.go:374] Setting ErrFile to fd 2...
	I1119 22:14:09.864845  899322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:09.865131  899322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:14:09.865542  899322 out.go:368] Setting JSON to false
	I1119 22:14:09.866493  899322 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14179,"bootTime":1763576271,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:14:09.866560  899322 start.go:143] virtualization:  
	I1119 22:14:09.869825  899322 out.go:179] * [functional-642533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:14:09.872663  899322 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:14:09.872742  899322 notify.go:221] Checking for updates...
	I1119 22:14:09.883684  899322 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:14:09.895189  899322 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:14:09.898153  899322 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:14:09.900924  899322 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:14:09.903800  899322 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:14:09.907262  899322 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:14:09.907821  899322 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:14:09.960393  899322 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:14:09.960500  899322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:14:10.084824  899322 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:14:10.071220042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:14:10.084931  899322 docker.go:319] overlay module found
	I1119 22:14:10.088461  899322 out.go:179] * Using the docker driver based on existing profile
	I1119 22:14:10.092132  899322 start.go:309] selected driver: docker
	I1119 22:14:10.092153  899322 start.go:930] validating driver "docker" against &{Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:14:10.092272  899322 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:14:10.095705  899322 out.go:203] 
	W1119 22:14:10.098782  899322 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 22:14:10.102187  899322 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.65s)
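For reference, the memory validation exercised here can be reproduced by hand. The profile name, flags, expected exit code (23) and the 1800MB minimum are all taken from the run above; only the standalone invocation is a sketch.

  # dry-run start with an allocation below the usable minimum of 1800MB; expected to fail fast
  out/minikube-linux-arm64 start -p functional-642533 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)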

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-642533 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (281.515042ms)

                                                
                                                
-- stdout --
	* [functional-642533] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:14:09.603044  899230 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:14:09.603262  899230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:09.603290  899230 out.go:374] Setting ErrFile to fd 2...
	I1119 22:14:09.603311  899230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:14:09.604327  899230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:14:09.604866  899230 out.go:368] Setting JSON to false
	I1119 22:14:09.605902  899230 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14178,"bootTime":1763576271,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:14:09.605998  899230 start.go:143] virtualization:  
	I1119 22:14:09.610944  899230 out.go:179] * [functional-642533] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1119 22:14:09.613967  899230 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:14:09.614145  899230 notify.go:221] Checking for updates...
	I1119 22:14:09.620333  899230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:14:09.623274  899230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:14:09.626211  899230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:14:09.628977  899230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:14:09.631843  899230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:14:09.635213  899230 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:14:09.635801  899230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:14:09.675194  899230 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:14:09.675295  899230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:14:09.785273  899230 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 22:14:09.773015028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:14:09.785392  899230 docker.go:319] overlay module found
	I1119 22:14:09.788928  899230 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 22:14:09.791982  899230 start.go:309] selected driver: docker
	I1119 22:14:09.792002  899230 start.go:930] validating driver "docker" against &{Name:functional-642533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-642533 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:14:09.792096  899230 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:14:09.795609  899230 out.go:203] 
	W1119 22:14:09.798405  899230 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 22:14:09.801632  899230 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
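The second invocation above uses a Go template to select individual status fields. A minimal by-hand sketch (profile name from this run; the template fields .Host, .Kubelet, .APIServer and .Kubeconfig are the ones queried above):

  # print selected status fields via a Go template
  out/minikube-linux-arm64 -p functional-642533 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'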

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [99d5d456-455a-46aa-96b0-ce59fcfcd1e7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003559509s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-642533 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-642533 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [be957601-b7d3-47d3-8c15-0b79c702f7c8] Pending
helpers_test.go:352: "sp-pod" [be957601-b7d3-47d3-8c15-0b79c702f7c8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [be957601-b7d3-47d3-8c15-0b79c702f7c8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003276842s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-642533 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-642533 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ffbd58da-5a0d-456b-951e-8c28ad845682] Pending
helpers_test.go:352: "sp-pod" [ffbd58da-5a0d-456b-951e-8c28ad845682] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003789433s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-642533 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.51s)
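The steps above amount to the following by-hand workflow; the manifests are the testdata files referenced in the log, so the paths assume a checkout of the minikube source tree:

  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pvc.yaml    # create the claim (myclaim)
  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pod.yaml    # sp-pod mounts the claim
  kubectl --context functional-642533 exec sp-pod -- touch /tmp/mount/foo               # write through the mount
  kubectl --context functional-642533 delete -f testdata/storage-provisioner/pod.yaml   # recycle the pod
  kubectl --context functional-642533 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-642533 exec sp-pod -- ls /tmp/mount                      # the file survives the pod restart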

                                                
                                    
TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh -n functional-642533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cp functional-642533:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd763102336/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh -n functional-642533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh -n functional-642533 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)

                                                
                                    
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/862175/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /etc/test/nested/copy/862175/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
TestFunctional/parallel/CertSync (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/862175.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /etc/ssl/certs/862175.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/862175.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /usr/share/ca-certificates/862175.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /etc/ssl/certs/51391683.0"
2025/11/19 22:14:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8621752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /etc/ssl/certs/8621752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8621752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /usr/share/ca-certificates/8621752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
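The hashed filenames checked here (51391683.0, 3ec20f2e.0) follow the usual OpenSSL subject-hash naming for /etc/ssl/certs. A rough way to confirm the mapping from inside the VM, assuming openssl is present in the guest image:

  # the printed subject hash should match the .0 filename verified above
  out/minikube-linux-arm64 -p functional-642533 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/862175.pem"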

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-642533 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active docker": exit status 1 (319.696957ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active containerd": exit status 1 (419.804586ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
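Because this profile runs crio, both alternative runtimes are expected to be inactive. By hand (profile name from this run; systemctl reports exit status 3 for an inactive unit, which minikube surfaces as exit 1):

  out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active docker"      # prints "inactive"
  out/minikube-linux-arm64 -p functional-642533 ssh "sudo systemctl is-active containerd"  # prints "inactive"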

                                                
                                    
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 895265: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-642533 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0c0bd455-cca8-42bb-a91b-3fee99c08331] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0c0bd455-cca8-42bb-a91b-3fee99c08331] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003810286s
I1119 22:03:49.841563  862175 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-642533 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.173.105 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
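Taken together, the tunnel subtests correspond to roughly this manual sequence; the commands and manifest are the ones from this run, and the closing curl is only illustrative of the "is working" check:

  out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr &                 # keep the tunnel running in the background
  kubectl --context functional-642533 apply -f testdata/testsvc.yaml                       # LoadBalancer service nginx-svc
  IP=$(kubectl --context functional-642533 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sf "http://$IP" >/dev/null && echo "tunnel at http://$IP is working!"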

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-642533 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "379.343034ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.824109ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "363.795418ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.321179ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdany-port2636240785/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763590435863848836" to /tmp/TestFunctionalparallelMountCmdany-port2636240785/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763590435863848836" to /tmp/TestFunctionalparallelMountCmdany-port2636240785/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763590435863848836" to /tmp/TestFunctionalparallelMountCmdany-port2636240785/001/test-1763590435863848836
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.0298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 22:13:56.219185  862175 retry.go:31] will retry after 388.829816ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 22:13 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 22:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 22:13 test-1763590435863848836
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh cat /mount-9p/test-1763590435863848836
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-642533 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [66125156-9f07-4ae6-843f-a9c9f7f4a96e] Pending
helpers_test.go:352: "busybox-mount" [66125156-9f07-4ae6-843f-a9c9f7f4a96e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [66125156-9f07-4ae6-843f-a9c9f7f4a96e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [66125156-9f07-4ae6-843f-a9c9f7f4a96e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003632844s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-642533 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdany-port2636240785/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.77s)
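The 9p mount exercised above can be reproduced outside the harness along these lines; the host directory is a placeholder, and as the log shows, the first findmnt may need a retry while the mount comes up:

  out/minikube-linux-arm64 mount -p functional-642533 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &   # /tmp/hostdir is a placeholder
  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p"                    # confirm the 9p mount in the guest
  out/minikube-linux-arm64 -p functional-642533 ssh "sudo umount -f /mount-9p"                          # clean up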

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdspecific-port290674594/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.90442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 22:14:04.021917  862175 retry.go:31] will retry after 527.012448ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdspecific-port290674594/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "sudo umount -f /mount-9p": exit status 1 (379.292563ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-642533 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdspecific-port290674594/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T" /mount1: exit status 1 (709.40871ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 22:14:06.450934  862175 retry.go:31] will retry after 507.881114ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-642533 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3360595609/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 service list -o json
functional_test.go:1504: Took "668.475882ms" to run "out/minikube-linux-arm64 -p functional-642533 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642533 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642533 image ls --format short --alsologtostderr:
I1119 22:14:23.987523  901592 out.go:360] Setting OutFile to fd 1 ...
I1119 22:14:23.987683  901592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:23.987694  901592 out.go:374] Setting ErrFile to fd 2...
I1119 22:14:23.987724  901592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:23.987997  901592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
I1119 22:14:23.988612  901592 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:23.988712  901592 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:23.989155  901592 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
I1119 22:14:24.030466  901592 ssh_runner.go:195] Run: systemctl --version
I1119 22:14:24.030524  901592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
I1119 22:14:24.057772  901592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
I1119 22:14:24.165479  901592 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
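The image listing is exercised in three output formats in this run; the by-hand equivalents are simply:

  out/minikube-linux-arm64 -p functional-642533 image ls --format short
  out/minikube-linux-arm64 -p functional-642533 image ls --format table
  out/minikube-linux-arm64 -p functional-642533 image ls --format json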

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642533 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642533 image ls --format table --alsologtostderr:
I1119 22:14:24.856532  901855 out.go:360] Setting OutFile to fd 1 ...
I1119 22:14:24.856684  901855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.856706  901855 out.go:374] Setting ErrFile to fd 2...
I1119 22:14:24.856713  901855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.857129  901855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
I1119 22:14:24.858114  901855 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.858295  901855 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.859094  901855 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
I1119 22:14:24.880422  901855 ssh_runner.go:195] Run: systemctl --version
I1119 22:14:24.880479  901855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
I1119 22:14:24.900081  901855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
I1119 22:14:25.013610  901855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642533 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8
c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library
/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"
id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d
91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"s
ize":"519884"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642533 image ls --format json --alsologtostderr:
I1119 22:14:24.579735  901767 out.go:360] Setting OutFile to fd 1 ...
I1119 22:14:24.579884  901767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.579890  901767 out.go:374] Setting ErrFile to fd 2...
I1119 22:14:24.579895  901767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.580175  901767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
I1119 22:14:24.580769  901767 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.580875  901767 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.581334  901767 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
I1119 22:14:24.602608  901767 ssh_runner.go:195] Run: systemctl --version
I1119 22:14:24.602662  901767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
I1119 22:14:24.626604  901767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
I1119 22:14:24.730151  901767 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
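Note: the JSON emitted by image ls --format json is a flat array of objects with id, repoDigests, repoTags and size fields, so it is easy to post-process on the host. A minimal sketch, assuming jq is available (this is not part of the test itself):

  out/minikube-linux-arm64 -p functional-642533 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'   # one "tag <TAB> size-in-bytes" line per tagged image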

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642533 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642533 image ls --format yaml --alsologtostderr:
I1119 22:14:24.283546  901688 out.go:360] Setting OutFile to fd 1 ...
I1119 22:14:24.283750  901688 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.283776  901688 out.go:374] Setting ErrFile to fd 2...
I1119 22:14:24.283796  901688 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.284086  901688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
I1119 22:14:24.284735  901688 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.284896  901688 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.285389  901688 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
I1119 22:14:24.306676  901688 ssh_runner.go:195] Run: systemctl --version
I1119 22:14:24.306726  901688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
I1119 22:14:24.340407  901688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
I1119 22:14:24.447351  901688 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642533 ssh pgrep buildkitd: exit status 1 (365.607201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image build -t localhost/my-image:functional-642533 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-642533 image build -t localhost/my-image:functional-642533 testdata/build --alsologtostderr: (3.369800551s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642533 image build -t localhost/my-image:functional-642533 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1dc9a2e4e3a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-642533
--> 6e3ca782c60
Successfully tagged localhost/my-image:functional-642533
6e3ca782c60c3e24ce55ac3f14d5ef32919e988dc99d226f3424cbe2d529fc6d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642533 image build -t localhost/my-image:functional-642533 testdata/build --alsologtostderr:
I1119 22:14:24.741682  901826 out.go:360] Setting OutFile to fd 1 ...
I1119 22:14:24.742483  901826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.742514  901826 out.go:374] Setting ErrFile to fd 2...
I1119 22:14:24.742535  901826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 22:14:24.742979  901826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
I1119 22:14:24.743976  901826 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.745180  901826 config.go:182] Loaded profile config "functional-642533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 22:14:24.745736  901826 cli_runner.go:164] Run: docker container inspect functional-642533 --format={{.State.Status}}
I1119 22:14:24.765992  901826 ssh_runner.go:195] Run: systemctl --version
I1119 22:14:24.766044  901826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642533
I1119 22:14:24.801971  901826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33571 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/functional-642533/id_rsa Username:docker}
I1119 22:14:24.905771  901826 build_images.go:162] Building image from path: /tmp/build.272101610.tar
I1119 22:14:24.905849  901826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 22:14:24.920169  901826 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.272101610.tar
I1119 22:14:24.925270  901826 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.272101610.tar: stat -c "%s %y" /var/lib/minikube/build/build.272101610.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.272101610.tar': No such file or directory
I1119 22:14:24.925297  901826 ssh_runner.go:362] scp /tmp/build.272101610.tar --> /var/lib/minikube/build/build.272101610.tar (3072 bytes)
I1119 22:14:24.952632  901826 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.272101610
I1119 22:14:24.961119  901826 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.272101610 -xf /var/lib/minikube/build/build.272101610.tar
I1119 22:14:24.969616  901826 crio.go:315] Building image: /var/lib/minikube/build/build.272101610
I1119 22:14:24.969693  901826 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-642533 /var/lib/minikube/build/build.272101610 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1119 22:14:28.024112  901826 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-642533 /var/lib/minikube/build/build.272101610 --cgroup-manager=cgroupfs: (3.054388573s)
I1119 22:14:28.024196  901826 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.272101610
I1119 22:14:28.032927  901826 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.272101610.tar
I1119 22:14:28.041660  901826 build_images.go:218] Built localhost/my-image:functional-642533 from /tmp/build.272101610.tar
I1119 22:14:28.041696  901826 build_images.go:134] succeeded building to: functional-642533
I1119 22:14:28.041702  901826 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
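Note: the three STEP lines above imply a build context containing a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a content.txt payload; the real contents of testdata/build are not shown in this log. A hand-run equivalent, sketched with assumed paths and file contents:

  mkdir -p /tmp/build-ctx
  echo hello > /tmp/build-ctx/content.txt                                            # placeholder payload (assumption)
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-ctx/Dockerfile
  out/minikube-linux-arm64 -p functional-642533 image build \
    -t localhost/my-image:functional-642533 /tmp/build-ctx --alsologtostderr
  out/minikube-linux-arm64 -p functional-642533 image ls                             # localhost/my-image:functional-642533 should now be listed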

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-642533
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image rm kicbase/echo-server:functional-642533 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-642533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
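Note: update-context rewrites the kubeconfig entry for the profile so it points at the cluster's current API server endpoint. A quick manual check of the result (a sketch, not part of the test):

  kubectl config current-context                                              # expect functional-642533
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # API server URL the context now uses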

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-642533
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-642533
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-642533
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 22:16:54.909850  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m26.404293415s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.29s)
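Note: the --ha start above creates three control-plane nodes that the later status output shows sharing the virtual endpoint https://192.168.49.254:8443. A condensed sketch of the same topology, using a hypothetical profile name of your own:

  out/minikube-linux-arm64 start -p ha-demo --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p ha-demo status --alsologtostderr -v 5    # expect three "type: Control Plane" entries, all Running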

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 kubectl -- rollout status deployment/busybox: (4.135218105s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-8x85d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-pcsft -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-r86kg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-8x85d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-pcsft -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-r86kg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-8x85d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-pcsft -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-r86kg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.83s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.52s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-8x85d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-8x85d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-pcsft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-pcsft -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-r86kg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 kubectl -- exec busybox-7b57f96db7-r86kg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.52s)
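Note: the awk 'NR==5' | cut -d' ' -f3 pipeline simply extracts the resolved address from busybox's nslookup output (fifth line, third field) so the test can ping it from inside the pod. Done by hand against one of the pods listed above, it looks roughly like:

  HOST_IP=$(kubectl --context ha-693024 exec busybox-7b57f96db7-8x85d -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-693024 exec busybox-7b57f96db7-8x85d -- ping -c 1 "$HOST_IP"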

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.21s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node add --alsologtostderr -v 5
E1119 22:18:41.228891  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.235530  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.246827  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.268341  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.309809  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.391240  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.552674  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:41.874971  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:42.516936  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:43.798422  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:46.360369  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:18:51.482307  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:19:01.724056  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 node add --alsologtostderr -v 5: (59.129036654s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5: (1.082017609s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.21s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-693024 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.069057783s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.79s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 status --output json --alsologtostderr -v 5: (1.025940937s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp testdata/cp-test.txt ha-693024:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283609945/001/cp-test_ha-693024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024:/home/docker/cp-test.txt ha-693024-m02:/home/docker/cp-test_ha-693024_ha-693024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test_ha-693024_ha-693024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024:/home/docker/cp-test.txt ha-693024-m03:/home/docker/cp-test_ha-693024_ha-693024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test_ha-693024_ha-693024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024:/home/docker/cp-test.txt ha-693024-m04:/home/docker/cp-test_ha-693024_ha-693024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test_ha-693024_ha-693024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp testdata/cp-test.txt ha-693024-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283609945/001/cp-test_ha-693024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt ha-693024:/home/docker/cp-test_ha-693024-m02_ha-693024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test_ha-693024-m02_ha-693024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt ha-693024-m03:/home/docker/cp-test_ha-693024-m02_ha-693024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test_ha-693024-m02_ha-693024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt ha-693024-m04:/home/docker/cp-test_ha-693024-m02_ha-693024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test_ha-693024-m02_ha-693024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp testdata/cp-test.txt ha-693024-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283609945/001/cp-test_ha-693024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m03:/home/docker/cp-test.txt ha-693024:/home/docker/cp-test_ha-693024-m03_ha-693024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test_ha-693024-m03_ha-693024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m03:/home/docker/cp-test.txt ha-693024-m02:/home/docker/cp-test_ha-693024-m03_ha-693024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test_ha-693024-m03_ha-693024-m02.txt"
E1119 22:19:22.206096  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m03:/home/docker/cp-test.txt ha-693024-m04:/home/docker/cp-test_ha-693024-m03_ha-693024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test_ha-693024-m03_ha-693024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp testdata/cp-test.txt ha-693024-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283609945/001/cp-test_ha-693024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m04:/home/docker/cp-test.txt ha-693024:/home/docker/cp-test_ha-693024-m04_ha-693024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024 "sudo cat /home/docker/cp-test_ha-693024-m04_ha-693024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m04:/home/docker/cp-test.txt ha-693024-m02:/home/docker/cp-test_ha-693024-m04_ha-693024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m02 "sudo cat /home/docker/cp-test_ha-693024-m04_ha-693024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m04:/home/docker/cp-test.txt ha-693024-m03:/home/docker/cp-test_ha-693024-m04_ha-693024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test_ha-693024-m04_ha-693024-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.79s)
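Note: the sequence above exercises the three shapes of minikube cp (host to node, node to host, node to node), each followed by ssh -n <node> sudo cat to verify the copied file. Condensed, with the host-side destination path being an assumption:

  out/minikube-linux-arm64 -p ha-693024 cp testdata/cp-test.txt ha-693024-m02:/home/docker/cp-test.txt                     # host -> node
  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt /tmp/cp-test-copy.txt                    # node -> host
  out/minikube-linux-arm64 -p ha-693024 cp ha-693024-m02:/home/docker/cp-test.txt ha-693024-m03:/home/docker/cp-test.txt   # node -> node
  out/minikube-linux-arm64 -p ha-693024 ssh -n ha-693024-m03 "sudo cat /home/docker/cp-test.txt"                           # verify contents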

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 node stop m02 --alsologtostderr -v 5: (12.088690427s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5: exit status 7 (808.053125ms)

                                                
                                                
-- stdout --
	ha-693024
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-693024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-693024-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-693024-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:19:40.102531  916768 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:19:40.102758  916768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:40.102765  916768 out.go:374] Setting ErrFile to fd 2...
	I1119 22:19:40.102770  916768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:19:40.103114  916768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:19:40.103325  916768 out.go:368] Setting JSON to false
	I1119 22:19:40.103353  916768 mustload.go:66] Loading cluster: ha-693024
	I1119 22:19:40.103779  916768 config.go:182] Loaded profile config "ha-693024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:19:40.103791  916768 status.go:174] checking status of ha-693024 ...
	I1119 22:19:40.104358  916768 cli_runner.go:164] Run: docker container inspect ha-693024 --format={{.State.Status}}
	I1119 22:19:40.104722  916768 notify.go:221] Checking for updates...
	I1119 22:19:40.128152  916768 status.go:371] ha-693024 host status = "Running" (err=<nil>)
	I1119 22:19:40.128175  916768 host.go:66] Checking if "ha-693024" exists ...
	I1119 22:19:40.128505  916768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-693024
	I1119 22:19:40.171093  916768 host.go:66] Checking if "ha-693024" exists ...
	I1119 22:19:40.171646  916768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:40.171751  916768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-693024
	I1119 22:19:40.191049  916768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33576 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/ha-693024/id_rsa Username:docker}
	I1119 22:19:40.292481  916768 ssh_runner.go:195] Run: systemctl --version
	I1119 22:19:40.299012  916768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:19:40.312306  916768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:19:40.373818  916768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-19 22:19:40.363687503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:19:40.374433  916768 kubeconfig.go:125] found "ha-693024" server: "https://192.168.49.254:8443"
	I1119 22:19:40.374461  916768 api_server.go:166] Checking apiserver status ...
	I1119 22:19:40.374510  916768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:19:40.392501  916768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	I1119 22:19:40.402191  916768 api_server.go:182] apiserver freezer: "9:freezer:/docker/84f9d1685801dd69fda19a1cfc084df1ad68c69c9936ac9272dceb4d3ad045e3/crio/crio-0cbfdfe247ad36dd9826aa116247a6a795db6c253f753310f27815cb79b7605d"
	I1119 22:19:40.402290  916768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/84f9d1685801dd69fda19a1cfc084df1ad68c69c9936ac9272dceb4d3ad045e3/crio/crio-0cbfdfe247ad36dd9826aa116247a6a795db6c253f753310f27815cb79b7605d/freezer.state
	I1119 22:19:40.411198  916768 api_server.go:204] freezer state: "THAWED"
	I1119 22:19:40.411278  916768 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:19:40.421453  916768 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:19:40.421485  916768 status.go:463] ha-693024 apiserver status = Running (err=<nil>)
	I1119 22:19:40.421497  916768 status.go:176] ha-693024 status: &{Name:ha-693024 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:19:40.421514  916768 status.go:174] checking status of ha-693024-m02 ...
	I1119 22:19:40.421862  916768 cli_runner.go:164] Run: docker container inspect ha-693024-m02 --format={{.State.Status}}
	I1119 22:19:40.440398  916768 status.go:371] ha-693024-m02 host status = "Stopped" (err=<nil>)
	I1119 22:19:40.440419  916768 status.go:384] host is not running, skipping remaining checks
	I1119 22:19:40.440426  916768 status.go:176] ha-693024-m02 status: &{Name:ha-693024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:19:40.440446  916768 status.go:174] checking status of ha-693024-m03 ...
	I1119 22:19:40.440748  916768 cli_runner.go:164] Run: docker container inspect ha-693024-m03 --format={{.State.Status}}
	I1119 22:19:40.459700  916768 status.go:371] ha-693024-m03 host status = "Running" (err=<nil>)
	I1119 22:19:40.459724  916768 host.go:66] Checking if "ha-693024-m03" exists ...
	I1119 22:19:40.460036  916768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-693024-m03
	I1119 22:19:40.478672  916768 host.go:66] Checking if "ha-693024-m03" exists ...
	I1119 22:19:40.479159  916768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:40.479205  916768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-693024-m03
	I1119 22:19:40.501498  916768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33586 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/ha-693024-m03/id_rsa Username:docker}
	I1119 22:19:40.619358  916768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:19:40.634774  916768 kubeconfig.go:125] found "ha-693024" server: "https://192.168.49.254:8443"
	I1119 22:19:40.634803  916768 api_server.go:166] Checking apiserver status ...
	I1119 22:19:40.634958  916768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:19:40.646665  916768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I1119 22:19:40.655178  916768 api_server.go:182] apiserver freezer: "9:freezer:/docker/998eb56ebb20871d78952e3b6ed9b38b8a2c1ac6ce6595bfa7ff572253e1ab9a/crio/crio-1339f0f2aa143f00405e524f760c9de937457a2d807b85f27db38b735b280918"
	I1119 22:19:40.655284  916768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/998eb56ebb20871d78952e3b6ed9b38b8a2c1ac6ce6595bfa7ff572253e1ab9a/crio/crio-1339f0f2aa143f00405e524f760c9de937457a2d807b85f27db38b735b280918/freezer.state
	I1119 22:19:40.663248  916768 api_server.go:204] freezer state: "THAWED"
	I1119 22:19:40.663291  916768 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:19:40.671855  916768 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:19:40.671883  916768 status.go:463] ha-693024-m03 apiserver status = Running (err=<nil>)
	I1119 22:19:40.671893  916768 status.go:176] ha-693024-m03 status: &{Name:ha-693024-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:19:40.671934  916768 status.go:174] checking status of ha-693024-m04 ...
	I1119 22:19:40.672268  916768 cli_runner.go:164] Run: docker container inspect ha-693024-m04 --format={{.State.Status}}
	I1119 22:19:40.690218  916768 status.go:371] ha-693024-m04 host status = "Running" (err=<nil>)
	I1119 22:19:40.690246  916768 host.go:66] Checking if "ha-693024-m04" exists ...
	I1119 22:19:40.690556  916768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-693024-m04
	I1119 22:19:40.707710  916768 host.go:66] Checking if "ha-693024-m04" exists ...
	I1119 22:19:40.708019  916768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:19:40.708070  916768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-693024-m04
	I1119 22:19:40.726244  916768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33591 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/ha-693024-m04/id_rsa Username:docker}
	I1119 22:19:40.828144  916768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:19:40.841446  916768 status.go:176] ha-693024-m04 status: &{Name:ha-693024-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
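Note: with one control plane stopped, the cluster keeps serving through the remaining apiservers at 192.168.49.254:8443, and the non-zero exit from minikube status (status 7 above) only reflects that a node is not Running, which is why the test treats it as expected. By hand:

  out/minikube-linux-arm64 -p ha-693024 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5; echo "status exit: $?"   # non-zero while m02 is Stopped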

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (29.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node start m02 --alsologtostderr -v 5
E1119 22:20:03.167650  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 node start m02 --alsologtostderr -v 5: (28.316313104s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5: (1.234028379s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.23609661s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (132s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 stop --alsologtostderr -v 5: (37.843480166s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 start --wait true --alsologtostderr -v 5
E1119 22:21:25.089521  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:21:54.908346  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 start --wait true --alsologtostderr -v 5: (1m33.951942677s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.00s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 node delete m03 --alsologtostderr -v 5: (10.866491455s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.84s)
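The go-template used above prints one Ready condition status per node, which is how the test asserts that only Ready nodes remain after the delete. If the nested quoting is hard to read, a jsonpath query yields the same information; this is an untested sketch of an equivalent check, not the form the test itself uses:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'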

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 stop --alsologtostderr -v 5: (36.014949066s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5: exit status 7 (116.011887ms)
-- stdout --
	ha-693024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-693024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-693024-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1119 22:23:13.260664  928785 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:23:13.260791  928785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:13.260802  928785 out.go:374] Setting ErrFile to fd 2...
	I1119 22:23:13.260806  928785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:13.261069  928785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:23:13.261254  928785 out.go:368] Setting JSON to false
	I1119 22:23:13.261281  928785 mustload.go:66] Loading cluster: ha-693024
	I1119 22:23:13.261678  928785 config.go:182] Loaded profile config "ha-693024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:23:13.261697  928785 status.go:174] checking status of ha-693024 ...
	I1119 22:23:13.262194  928785 cli_runner.go:164] Run: docker container inspect ha-693024 --format={{.State.Status}}
	I1119 22:23:13.262456  928785 notify.go:221] Checking for updates...
	I1119 22:23:13.281335  928785 status.go:371] ha-693024 host status = "Stopped" (err=<nil>)
	I1119 22:23:13.281361  928785 status.go:384] host is not running, skipping remaining checks
	I1119 22:23:13.281369  928785 status.go:176] ha-693024 status: &{Name:ha-693024 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:23:13.281399  928785 status.go:174] checking status of ha-693024-m02 ...
	I1119 22:23:13.281709  928785 cli_runner.go:164] Run: docker container inspect ha-693024-m02 --format={{.State.Status}}
	I1119 22:23:13.299199  928785 status.go:371] ha-693024-m02 host status = "Stopped" (err=<nil>)
	I1119 22:23:13.299224  928785 status.go:384] host is not running, skipping remaining checks
	I1119 22:23:13.299232  928785 status.go:176] ha-693024-m02 status: &{Name:ha-693024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:23:13.299264  928785 status.go:174] checking status of ha-693024-m04 ...
	I1119 22:23:13.299570  928785 cli_runner.go:164] Run: docker container inspect ha-693024-m04 --format={{.State.Status}}
	I1119 22:23:13.323511  928785 status.go:371] ha-693024-m04 host status = "Stopped" (err=<nil>)
	I1119 22:23:13.323536  928785 status.go:384] host is not running, skipping remaining checks
	I1119 22:23:13.323550  928785 status.go:176] ha-693024-m04 status: &{Name:ha-693024-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.75s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 22:23:41.231045  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:24:08.931554  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.750985282s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.72s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 node add --control-plane --alsologtostderr -v 5: (1m23.659661426s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-693024 status --alsologtostderr -v 5: (1.063820737s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.72s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095968051s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
TestJSONOutput/start/Command (79.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-727330 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1119 22:26:54.908652  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-727330 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.479029965s)
--- PASS: TestJSONOutput/start/Command (79.48s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-727330 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-727330 --output=json --user=testUser: (5.859701516s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-158542 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-158542 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.47657ms)
-- stdout --
	{"specversion":"1.0","id":"730bc84a-2b0f-415a-a54d-31b907533221","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-158542] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bdf3040-48f8-4fd3-ba51-5a49ca455080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"d2fe6478-b914-4930-94df-37c7480d4305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fe3629b1-6310-4a8d-9b5a-0ff0adf8cb3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig"}}
	{"specversion":"1.0","id":"b9ec94d5-a8f0-4820-bee2-72efecdc12c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube"}}
	{"specversion":"1.0","id":"57c0821b-849c-43a5-a82b-baa664702877","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7f78906c-8139-433b-93ca-1c3dc7a5d02b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29b75e52-8cd8-4d23-81a9-864f6cee2681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-158542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-158542
--- PASS: TestErrorJSONOutput (0.24s)
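Each line of the --output=json stream above is a CloudEvents-style JSON object, so the failure can be picked out mechanically rather than by scraping text. A small sketch, assuming jq is available (the profile name and driver are simply the ones this test used):

	out/minikube-linux-arm64 start -p json-output-error-158542 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'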

                                                
                                    
TestKicCustomNetwork/create_custom_network (66.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-748890 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-748890 --network=: (1m4.304445773s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-748890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-748890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-748890: (2.263651254s)
--- PASS: TestKicCustomNetwork/create_custom_network (66.60s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-257084 --network=bridge
E1119 22:28:41.232006  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-257084 --network=bridge: (36.727895355s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-257084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-257084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-257084: (2.171160851s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.92s)

                                                
                                    
TestKicExistingNetwork (38.65s)

=== RUN   TestKicExistingNetwork
I1119 22:29:06.178475  862175 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 22:29:06.195500  862175 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 22:29:06.196395  862175 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 22:29:06.196444  862175 cli_runner.go:164] Run: docker network inspect existing-network
W1119 22:29:06.213143  862175 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 22:29:06.213177  862175 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1119 22:29:06.213198  862175 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1119 22:29:06.213306  862175 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 22:29:06.239264  862175 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91cf836446ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:87:e1:c7:0d:56} reservation:<nil>}
I1119 22:29:06.239601  862175 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001465560}
I1119 22:29:06.239628  862175 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 22:29:06.239682  862175 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 22:29:06.297668  862175 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-696169 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-696169 --network=existing-network: (36.330742972s)
helpers_test.go:175: Cleaning up "existing-network-696169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-696169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-696169: (2.160833228s)
I1119 22:29:44.805468  862175 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.65s)
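The sequence above amounts to: inspect for an existing network, pick a free private subnet (192.168.49.0/24 was taken, so 192.168.58.0/24 was chosen), create a labelled bridge network, and then point minikube at it with --network. A condensed sketch of the same manual steps, with the extra -o options minikube passes left out for brevity; the labels mirror the ones minikube itself applies in the cli_runner line above:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p existing-network-696169 --network=existing-network
	docker network inspect existing-network --format "{{(index .IPAM.Config 0).Subnet}}"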

                                                
                                    
TestKicCustomSubnet (38.98s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-627499 --subnet=192.168.60.0/24
E1119 22:29:57.999011  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-627499 --subnet=192.168.60.0/24: (36.708822127s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-627499 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-627499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-627499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-627499: (2.242384468s)
--- PASS: TestKicCustomSubnet (38.98s)

                                                
                                    
TestKicStaticIP (39.51s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-582798 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-582798 --static-ip=192.168.200.200: (37.139520584s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-582798 ip
helpers_test.go:175: Cleaning up "static-ip-582798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-582798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-582798: (2.195611763s)
--- PASS: TestKicStaticIP (39.51s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-880602 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-880602 --driver=docker  --container-runtime=crio: (33.249214281s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-883400 --driver=docker  --container-runtime=crio
E1119 22:31:54.908338  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-883400 --driver=docker  --container-runtime=crio: (34.839530711s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-880602
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-883400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-883400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-883400
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-883400: (2.079285407s)
helpers_test.go:175: Cleaning up "first-880602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-880602
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-880602: (2.06966445s)
--- PASS: TestMinikubeProfile (73.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-862647 --memory=3072 --mount-string /tmp/TestMountStartserial1191930562/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-862647 --memory=3072 --mount-string /tmp/TestMountStartserial1191930562/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.995239186s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.00s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-862647 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.56s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-864616 --memory=3072 --mount-string /tmp/TestMountStartserial1191930562/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-864616 --memory=3072 --mount-string /tmp/TestMountStartserial1191930562/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.560070146s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.56s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864616 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-862647 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-862647 --alsologtostderr -v=5: (1.702707825s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864616 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-864616
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-864616: (1.284445943s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-864616
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-864616: (7.197659648s)
--- PASS: TestMountStart/serial/RestartStopped (8.20s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864616 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (139.55s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-744734 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 22:33:41.229444  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:35:04.293151  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-744734 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.002064802s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.55s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-744734 -- rollout status deployment/busybox: (3.363291474s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-f9ccr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-ttlvn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-f9ccr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-ttlvn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-f9ccr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-ttlvn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.14s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-f9ccr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-f9ccr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-ttlvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-744734 -- exec busybox-7b57f96db7-ttlvn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
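The pipeline in the exec commands above assumes busybox's nslookup prints the resolved address on its fifth output line; awk 'NR==5' keeps that line and cut takes the third space-separated field, leaving a bare IP (192.168.67.1, the host gateway) that the second exec then pings. Run by hand it looks roughly like this, using kubectl --context in place of the test's minikube kubectl wrapper, and whichever pod name the busybox deployment currently has:

	kubectl --context multinode-744734 exec busybox-7b57f96db7-f9ccr -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context multinode-744734 exec busybox-7b57f96db7-f9ccr -- sh -c "ping -c 1 192.168.67.1"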

                                                
                                    
TestMultiNode/serial/AddNode (59.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-744734 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-744734 -v=5 --alsologtostderr: (59.056789322s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.73s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-744734 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp testdata/cp-test.txt multinode-744734:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile915751895/001/cp-test_multinode-744734.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734:/home/docker/cp-test.txt multinode-744734-m02:/home/docker/cp-test_multinode-744734_multinode-744734-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test_multinode-744734_multinode-744734-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734:/home/docker/cp-test.txt multinode-744734-m03:/home/docker/cp-test_multinode-744734_multinode-744734-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test_multinode-744734_multinode-744734-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp testdata/cp-test.txt multinode-744734-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile915751895/001/cp-test_multinode-744734-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m02:/home/docker/cp-test.txt multinode-744734:/home/docker/cp-test_multinode-744734-m02_multinode-744734.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test_multinode-744734-m02_multinode-744734.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m02:/home/docker/cp-test.txt multinode-744734-m03:/home/docker/cp-test_multinode-744734-m02_multinode-744734-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test_multinode-744734-m02_multinode-744734-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp testdata/cp-test.txt multinode-744734-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile915751895/001/cp-test_multinode-744734-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m03:/home/docker/cp-test.txt multinode-744734:/home/docker/cp-test_multinode-744734-m03_multinode-744734.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test_multinode-744734-m03_multinode-744734.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734-m03:/home/docker/cp-test.txt multinode-744734-m02:/home/docker/cp-test_multinode-744734-m03_multinode-744734-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test_multinode-744734-m03_multinode-744734-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.48s)
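The block above is one pattern repeated for every node pair: minikube cp pushes a file to a node (or pulls it back, or copies it node to node), and minikube ssh -n reads it back to prove the transfer landed. The core round trip, taken directly from the commands above:

	out/minikube-linux-arm64 -p multinode-744734 cp testdata/cp-test.txt multinode-744734:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p multinode-744734 cp multinode-744734:/home/docker/cp-test.txt multinode-744734-m02:/home/docker/cp-test_multinode-744734_multinode-744734-m02.txt
	out/minikube-linux-arm64 -p multinode-744734 ssh -n multinode-744734-m02 "sudo cat /home/docker/cp-test_multinode-744734_multinode-744734-m02.txt"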

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-744734 node stop m03: (1.314350692s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-744734 status: exit status 7 (579.160234ms)
-- stdout --
	multinode-744734
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-744734-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-744734-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr: exit status 7 (529.563473ms)
-- stdout --
	multinode-744734
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-744734-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-744734-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1119 22:36:27.350017  979104 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:36:27.350124  979104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:27.350134  979104 out.go:374] Setting ErrFile to fd 2...
	I1119 22:36:27.350139  979104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:27.350488  979104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:36:27.350706  979104 out.go:368] Setting JSON to false
	I1119 22:36:27.350732  979104 mustload.go:66] Loading cluster: multinode-744734
	I1119 22:36:27.351466  979104 config.go:182] Loaded profile config "multinode-744734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:27.351493  979104 status.go:174] checking status of multinode-744734 ...
	I1119 22:36:27.351475  979104 notify.go:221] Checking for updates...
	I1119 22:36:27.352036  979104 cli_runner.go:164] Run: docker container inspect multinode-744734 --format={{.State.Status}}
	I1119 22:36:27.371813  979104 status.go:371] multinode-744734 host status = "Running" (err=<nil>)
	I1119 22:36:27.371840  979104 host.go:66] Checking if "multinode-744734" exists ...
	I1119 22:36:27.372222  979104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-744734
	I1119 22:36:27.397576  979104 host.go:66] Checking if "multinode-744734" exists ...
	I1119 22:36:27.397918  979104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:36:27.397970  979104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-744734
	I1119 22:36:27.416678  979104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33696 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/multinode-744734/id_rsa Username:docker}
	I1119 22:36:27.520852  979104 ssh_runner.go:195] Run: systemctl --version
	I1119 22:36:27.527754  979104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:36:27.541286  979104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:36:27.602241  979104 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 22:36:27.592860632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:36:27.602792  979104 kubeconfig.go:125] found "multinode-744734" server: "https://192.168.67.2:8443"
	I1119 22:36:27.602839  979104 api_server.go:166] Checking apiserver status ...
	I1119 22:36:27.602960  979104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:36:27.614537  979104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1119 22:36:27.622965  979104 api_server.go:182] apiserver freezer: "9:freezer:/docker/c5943623b571efd8f62ac727d2fcd1fabc88f614bd8568d37d8d26ab8a890309/crio/crio-e20bce99d594b875ab29c5871c911ea83c54da34187df6be589a2b6593161e4e"
	I1119 22:36:27.623036  979104 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5943623b571efd8f62ac727d2fcd1fabc88f614bd8568d37d8d26ab8a890309/crio/crio-e20bce99d594b875ab29c5871c911ea83c54da34187df6be589a2b6593161e4e/freezer.state
	I1119 22:36:27.630976  979104 api_server.go:204] freezer state: "THAWED"
	I1119 22:36:27.631006  979104 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 22:36:27.639774  979104 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 22:36:27.639799  979104 status.go:463] multinode-744734 apiserver status = Running (err=<nil>)
	I1119 22:36:27.639810  979104 status.go:176] multinode-744734 status: &{Name:multinode-744734 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:36:27.639828  979104 status.go:174] checking status of multinode-744734-m02 ...
	I1119 22:36:27.640142  979104 cli_runner.go:164] Run: docker container inspect multinode-744734-m02 --format={{.State.Status}}
	I1119 22:36:27.657014  979104 status.go:371] multinode-744734-m02 host status = "Running" (err=<nil>)
	I1119 22:36:27.657039  979104 host.go:66] Checking if "multinode-744734-m02" exists ...
	I1119 22:36:27.657349  979104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-744734-m02
	I1119 22:36:27.674336  979104 host.go:66] Checking if "multinode-744734-m02" exists ...
	I1119 22:36:27.674648  979104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:36:27.674698  979104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-744734-m02
	I1119 22:36:27.692164  979104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33701 SSHKeyPath:/home/jenkins/minikube-integration/21918-860325/.minikube/machines/multinode-744734-m02/id_rsa Username:docker}
	I1119 22:36:27.792128  979104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:36:27.805434  979104 status.go:176] multinode-744734-m02 status: &{Name:multinode-744734-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:36:27.805472  979104 status.go:174] checking status of multinode-744734-m03 ...
	I1119 22:36:27.805789  979104 cli_runner.go:164] Run: docker container inspect multinode-744734-m03 --format={{.State.Status}}
	I1119 22:36:27.822302  979104 status.go:371] multinode-744734-m03 host status = "Stopped" (err=<nil>)
	I1119 22:36:27.822325  979104 status.go:384] host is not running, skipping remaining checks
	I1119 22:36:27.822333  979104 status.go:176] multinode-744734-m03 status: &{Name:multinode-744734-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
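For reference, the status check in the log above decides the apiserver state in three steps: find the kube-apiserver process, confirm its cgroup freezer state is THAWED (i.e. not paused), and finally query /healthz. The Go program below is a minimal, hypothetical sketch of that same sequence; it runs the commands locally rather than over SSH, and the apiserver address is simply the one that appears in this run's log, so none of it should be read as minikube's actual implementation.

// Hypothetical sketch of the three-step apiserver check visible in the
// StopNode status log: pgrep the process, read its freezer cgroup state,
// then hit /healthz. Paths, address, and local exec are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: locate the kube-apiserver PID (same pgrep pattern as the log).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver: Stopped (no process found)")
		return
	}
	pid := strings.TrimSpace(string(out))

	// Step 2: resolve the freezer cgroup for that PID and read freezer.state.
	cg, _ := exec.Command("sh", "-c",
		"sudo egrep '^[0-9]+:freezer:' /proc/"+pid+"/cgroup | cut -d: -f3").Output()
	statePath := "/sys/fs/cgroup/freezer" + strings.TrimSpace(string(cg)) + "/freezer.state"
	state, _ := exec.Command("sudo", "cat", statePath).Output()
	if strings.TrimSpace(string(state)) == "FROZEN" {
		fmt.Println("apiserver: Paused")
		return
	}

	// Step 3: query the healthz endpoint (insecure TLS only for this sketch).
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil || resp.StatusCode != http.StatusOK {
		fmt.Println("apiserver: Error")
		return
	}
	fmt.Println("apiserver: Running")
}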

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-744734 node start m03 -v=5 --alsologtostderr: (7.344912702s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.14s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-744734
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-744734
E1119 22:36:54.910186  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-744734: (25.123595593s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-744734 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-744734 --wait=true -v=5 --alsologtostderr: (48.354303325s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-744734
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-744734 node delete m03: (4.987486871s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
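The go-template passed to kubectl above prints the Ready condition of every node, one line per node. As a small illustration (not part of the test suite), the same template can be exercised with Go's text/template against a hand-written stand-in for the `kubectl get nodes -o json` payload; the sample JSON below is invented purely to show the template's shape.

// Illustration only: evaluate the node-Ready go-template from the test
// against a made-up stand-in for `kubectl get nodes -o json`.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const nodesJSON = `{
  "items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
  ]
}`

const readyTemplate = `{{range .items}}{{range .status.conditions}}` +
	`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]any
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// Prints one " True" line per node, which is what the test checks for.
	tmpl := template.Must(template.New("ready").Parse(readyTemplate))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}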

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-744734 stop: (23.852401413s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-744734 status: exit status 7 (101.853028ms)

                                                
                                                
-- stdout --
	multinode-744734
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-744734-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr: exit status 7 (87.33991ms)

                                                
                                                
-- stdout --
	multinode-744734
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-744734-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:38:19.259822  986934 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:38:19.259949  986934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:19.259961  986934 out.go:374] Setting ErrFile to fd 2...
	I1119 22:38:19.259967  986934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:38:19.260228  986934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:38:19.260444  986934 out.go:368] Setting JSON to false
	I1119 22:38:19.260477  986934 mustload.go:66] Loading cluster: multinode-744734
	I1119 22:38:19.260566  986934 notify.go:221] Checking for updates...
	I1119 22:38:19.260868  986934 config.go:182] Loaded profile config "multinode-744734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:38:19.260886  986934 status.go:174] checking status of multinode-744734 ...
	I1119 22:38:19.261408  986934 cli_runner.go:164] Run: docker container inspect multinode-744734 --format={{.State.Status}}
	I1119 22:38:19.281061  986934 status.go:371] multinode-744734 host status = "Stopped" (err=<nil>)
	I1119 22:38:19.281086  986934 status.go:384] host is not running, skipping remaining checks
	I1119 22:38:19.281093  986934 status.go:176] multinode-744734 status: &{Name:multinode-744734 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:38:19.281118  986934 status.go:174] checking status of multinode-744734-m02 ...
	I1119 22:38:19.281444  986934 cli_runner.go:164] Run: docker container inspect multinode-744734-m02 --format={{.State.Status}}
	I1119 22:38:19.297749  986934 status.go:371] multinode-744734-m02 host status = "Stopped" (err=<nil>)
	I1119 22:38:19.297770  986934 status.go:384] host is not running, skipping remaining checks
	I1119 22:38:19.297777  986934 status.go:176] multinode-744734-m02 status: &{Name:multinode-744734-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-744734 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 22:38:41.229475  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-744734 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.325537782s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-744734 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-744734
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-744734-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-744734-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.015781ms)

                                                
                                                
-- stdout --
	* [multinode-744734-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-744734-m02' is duplicated with machine name 'multinode-744734-m02' in profile 'multinode-744734'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-744734-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-744734-m03 --driver=docker  --container-runtime=crio: (32.950116493s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-744734
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-744734: exit status 80 (338.861408ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-744734 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-744734-m03 already exists in multinode-744734-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-744734-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-744734-m03: (2.269117695s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.71s)

                                                
                                    
TestPreload (126.26s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-176049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-176049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.217096094s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-176049 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-176049 image pull gcr.io/k8s-minikube/busybox: (2.106085306s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-176049
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-176049: (5.926082089s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-176049 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1119 22:41:54.908379  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-176049 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.322389535s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-176049 image list
helpers_test.go:175: Cleaning up "test-preload-176049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-176049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-176049: (2.445329146s)
--- PASS: TestPreload (126.26s)

                                                
                                    
TestScheduledStopUnix (106.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-334536 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-334536 --memory=3072 --driver=docker  --container-runtime=crio: (30.019391133s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334536 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:42:27.743943 1000954 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:42:27.744142 1000954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:27.744168 1000954 out.go:374] Setting ErrFile to fd 2...
	I1119 22:42:27.744188 1000954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:27.744456 1000954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:42:27.744780 1000954 out.go:368] Setting JSON to false
	I1119 22:42:27.744935 1000954 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:27.745315 1000954 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:42:27.745407 1000954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/config.json ...
	I1119 22:42:27.745619 1000954 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:27.745779 1000954 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-334536 -n scheduled-stop-334536
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334536 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:42:28.196942 1001044 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:42:28.197117 1001044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:28.197149 1001044 out.go:374] Setting ErrFile to fd 2...
	I1119 22:42:28.197168 1001044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:28.197456 1001044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:42:28.197786 1001044 out.go:368] Setting JSON to false
	I1119 22:42:28.200640 1001044 daemonize_unix.go:73] killing process 1000970 as it is an old scheduled stop
	I1119 22:42:28.201358 1001044 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:28.201882 1001044 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:42:28.202018 1001044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/config.json ...
	I1119 22:42:28.202251 1001044 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:28.202410 1001044 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 22:42:28.211683  862175 retry.go:31] will retry after 144.964µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.212777  862175 retry.go:31] will retry after 109.782µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.213869  862175 retry.go:31] will retry after 246.654µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.215001  862175 retry.go:31] will retry after 276.968µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.216129  862175 retry.go:31] will retry after 259.841µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.217249  862175 retry.go:31] will retry after 945.518µs: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.218369  862175 retry.go:31] will retry after 1.611586ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.220510  862175 retry.go:31] will retry after 2.18664ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.223755  862175 retry.go:31] will retry after 2.022903ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.225888  862175 retry.go:31] will retry after 2.447079ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.230711  862175 retry.go:31] will retry after 7.600522ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.239099  862175 retry.go:31] will retry after 10.836339ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.250460  862175 retry.go:31] will retry after 12.816338ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.263893  862175 retry.go:31] will retry after 28.329077ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
I1119 22:42:28.293095  862175 retry.go:31] will retry after 32.085612ms: open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334536 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-334536 -n scheduled-stop-334536
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-334536
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334536 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:42:54.148050 1001405 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:42:54.148237 1001405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:54.148263 1001405 out.go:374] Setting ErrFile to fd 2...
	I1119 22:42:54.148283 1001405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:42:54.148574 1001405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:42:54.148883 1001405 out.go:368] Setting JSON to false
	I1119 22:42:54.149025 1001405 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:54.149415 1001405 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:42:54.149511 1001405 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/scheduled-stop-334536/config.json ...
	I1119 22:42:54.149751 1001405 mustload.go:66] Loading cluster: scheduled-stop-334536
	I1119 22:42:54.149911 1001405 config.go:182] Loaded profile config "scheduled-stop-334536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-334536
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-334536: exit status 7 (73.394645ms)

                                                
                                                
-- stdout --
	scheduled-stop-334536
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-334536 -n scheduled-stop-334536
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-334536 -n scheduled-stop-334536: exit status 7 (65.922466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-334536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-334536
E1119 22:43:41.229503  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-334536: (4.953324646s)
--- PASS: TestScheduledStopUnix (106.58s)
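The retry.go lines above show the harness polling for the scheduled-stop pid file, roughly doubling a very short delay between attempts until the file exists. A minimal wait-for-file loop in that spirit is sketched below; the path and timings are assumptions for illustration, and this is not the retry package the harness actually uses.

// Hypothetical wait-for-file loop mirroring the retry.go lines above:
// poll for the scheduled-stop pid file, doubling the delay each attempt.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, attempts int) error {
	delay := 150 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: file not present yet\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	// Illustrative path only; the real profile directory comes from MINIKUBE_HOME.
	pidFile := os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-334536/pid")
	if err := waitForFile(pidFile, 15); err != nil {
		fmt.Println(err)
	}
}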

                                                
                                    
TestInsufficientStorage (14.93s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-050736 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-050736 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (12.354384343s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"af72b214-b23e-4be1-975c-89518d067ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-050736] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9088eabb-c232-461c-99cd-25ede013ed85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"ab2018d3-2eed-4040-8bc7-099a0a621b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b9ef668-49a0-4599-a562-f1e362f8c1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig"}}
	{"specversion":"1.0","id":"b60bd016-3b55-476c-842c-31c64a242156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube"}}
	{"specversion":"1.0","id":"a11e74ca-7f13-429b-9186-b09be1f3ccf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e7aa3980-af0c-4673-9bfe-6be207f8f8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6720336e-0289-44d6-81d3-0383f705d8a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b57475cd-7fc2-41fa-ba60-f3de429bb66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c0767a09-1d0b-4b29-a36c-3291f9df09c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"07af8f43-de49-41f8-9208-4b2cdce7f832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fe2c9899-38e0-4059-932d-955bf82a32a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-050736\" primary control-plane node in \"insufficient-storage-050736\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d64afa5-828a-43b3-a42a-5a3ca4631dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763561786-21918 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce4afa03-f937-4126-81d8-cfebe52d1b39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6787a6a5-8a1e-440d-93e1-28b577be8410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-050736 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-050736 --output=json --layout=cluster: exit status 7 (307.458726ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-050736","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-050736","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:43:56.895860 1003123 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-050736" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-050736 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-050736 --output=json --layout=cluster: exit status 7 (299.37182ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-050736","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-050736","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:43:57.197480 1003190 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-050736" does not appear in /home/jenkins/minikube-integration/21918-860325/kubeconfig
	E1119 22:43:57.207740 1003190 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/insufficient-storage-050736/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-050736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-050736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-050736: (1.963655224s)
--- PASS: TestInsufficientStorage (14.93s)
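With --output=json, every progress line in the stdout block above is a CloudEvents-style JSON object: a specversion/id/source/type envelope plus a data payload holding the step or error details. The reader below is a hedged sketch for consuming such lines from stdin; its struct mirrors only the fields visible in this log and is not minikube's own event type.

// Hypothetical reader for minikube --output=json lines as captured above.
// Fields are inferred from this log only (CloudEvents-style envelope plus
// a string-valued data payload).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error lines can be long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // ignore anything that is not a JSON event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		switch {
		case strings.HasSuffix(ev.Type, ".error"):
			fmt.Printf("ERROR %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case strings.HasSuffix(ev.Type, ".step"):
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}

Piping a start command's --output=json stream into a reader like this would print each numbered step and the RSRC_DOCKER_STORAGE error seen above.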

                                                
                                    
TestRunningBinaryUpgrade (52.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2367398125 start -p running-upgrade-770765 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2367398125 start -p running-upgrade-770765 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.501662651s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-770765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-770765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.247905699s)
helpers_test.go:175: Cleaning up "running-upgrade-770765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-770765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-770765: (2.765862736s)
--- PASS: TestRunningBinaryUpgrade (52.72s)

                                                
                                    
TestKubernetesUpgrade (518.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.908243614s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-154655
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-154655: (1.477446348s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-154655 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-154655 status --format={{.Host}}: exit status 7 (83.49087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 22:46:38.001363  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.113515514s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-154655 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (112.746235ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-154655] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-154655
	    minikube start -p kubernetes-upgrade-154655 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1546552 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-154655 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-154655 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (3m13.864165854s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-154655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-154655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-154655: (2.575024233s)
--- PASS: TestKubernetesUpgrade (518.27s)

                                                
                                    
TestMissingContainerUpgrade (116.5s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3408920928 start -p missing-upgrade-290352 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3408920928 start -p missing-upgrade-290352 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.215922733s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-290352
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-290352
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-290352 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-290352 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.726139197s)
helpers_test.go:175: Cleaning up "missing-upgrade-290352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-290352
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-290352: (2.383650974s)
--- PASS: TestMissingContainerUpgrade (116.50s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (100.882092ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-482978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482978 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482978 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.820473134s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482978 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.39s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (49.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.027964683s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482978 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-482978 status -o json: exit status 2 (595.465117ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-482978","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-482978
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-482978: (2.132083857s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.76s)

                                                
                                    
TestNoKubernetes/serial/Start (5.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482978 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.944148326s)
--- PASS: TestNoKubernetes/serial/Start (5.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21918-860325/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482978 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482978 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.259238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-482978
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-482978: (1.3921393s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482978 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482978 --driver=docker  --container-runtime=crio: (8.450520215s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482978 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482978 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.234815ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (66.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.99252300 start -p stopped-upgrade-196185 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.99252300 start -p stopped-upgrade-196185 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.190496727s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.99252300 -p stopped-upgrade-196185 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.99252300 -p stopped-upgrade-196185 stop: (1.274743678s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-196185 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 22:46:54.908136  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-196185 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.539868473s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.01s)
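The upgrade scenario above boils down to three CLI invocations: start a profile with an old minikube release, stop it, then start it again with the freshly built binary. A hypothetical standalone Go sketch of that sequence follows, using the binary paths, flags, and profile name from this log (not the actual version_upgrade_test.go code):

// upgrade_flow.go: sketch of the stopped-binary upgrade flow exercised above.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command and streams its output, aborting on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", bin, args, err)
	}
}

func main() {
	old := "/tmp/minikube-v1.32.0.99252300" // old release binary, path from this log
	run(old, "start", "-p", "stopped-upgrade-196185", "--memory=3072",
		"--vm-driver=docker", "--container-runtime=crio")
	run(old, "-p", "stopped-upgrade-196185", "stop")
	run("out/minikube-linux-arm64", "start", "-p", "stopped-upgrade-196185",
		"--memory=3072", "--driver=docker", "--container-runtime=crio")
}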

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-196185
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-196185: (1.2377933s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
TestPause/serial/Start (81.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-743639 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1119 22:48:41.228913  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-743639 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.757555479s)
--- PASS: TestPause/serial/Start (81.76s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-743639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-743639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.853357085s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.87s)

                                                
                                    
TestNetworkPlugins/group/false (3.91s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-334366 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-334366 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (201.129867ms)

                                                
                                                
-- stdout --
	* [false-334366] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:50:42.273994 1040126 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:50:42.274121 1040126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:50:42.274137 1040126 out.go:374] Setting ErrFile to fd 2...
	I1119 22:50:42.274142 1040126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:50:42.274792 1040126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-860325/.minikube/bin
	I1119 22:50:42.275824 1040126 out.go:368] Setting JSON to false
	I1119 22:50:42.277008 1040126 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16371,"bootTime":1763576271,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1119 22:50:42.277168 1040126 start.go:143] virtualization:  
	I1119 22:50:42.280785 1040126 out.go:179] * [false-334366] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 22:50:42.283179 1040126 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:50:42.283390 1040126 notify.go:221] Checking for updates...
	I1119 22:50:42.289528 1040126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:50:42.292754 1040126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-860325/kubeconfig
	I1119 22:50:42.295997 1040126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-860325/.minikube
	I1119 22:50:42.299009 1040126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 22:50:42.302017 1040126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:50:42.305777 1040126 config.go:182] Loaded profile config "kubernetes-upgrade-154655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:50:42.305967 1040126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:50:42.334182 1040126 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 22:50:42.334297 1040126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:50:42.402886 1040126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 22:50:42.392777422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 22:50:42.402994 1040126 docker.go:319] overlay module found
	I1119 22:50:42.406246 1040126 out.go:179] * Using the docker driver based on user configuration
	I1119 22:50:42.409046 1040126 start.go:309] selected driver: docker
	I1119 22:50:42.409082 1040126 start.go:930] validating driver "docker" against <nil>
	I1119 22:50:42.409097 1040126 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:50:42.412712 1040126 out.go:203] 
	W1119 22:50:42.415586 1040126 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1119 22:50:42.418416 1040126 out.go:203] 

                                                
                                                
** /stderr **
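The exit status 14 above is the expected outcome: minikube rejects "--cni=false" with the crio runtime (MK_USAGE) because crio needs a CNI plugin. A hypothetical standalone Go sketch that asserts on that exit code, using only the flags from this log (not the actual net_test.go code):

// cni_usage_check.go: sketch asserting that crio + --cni=false is rejected
// with MK_USAGE (exit status 14), as logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-334366",
		"--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE exit code (14)")
		return
	}
	fmt.Println("unexpected result:", err)
}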
net_test.go:88: 
----------------------- debugLogs start: false-334366 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:46:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-154655
contexts:
- context:
    cluster: kubernetes-upgrade-154655
    user: kubernetes-upgrade-154655
  name: kubernetes-upgrade-154655
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-154655
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.crt
    client-key: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-334366

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-334366"

                                                
                                                
----------------------- debugLogs end: false-334366 [took: 3.546524846s] --------------------------------
helpers_test.go:175: Cleaning up "false-334366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-334366
--- PASS: TestNetworkPlugins/group/false (3.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (74.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m14.354544317s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (74.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.66s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m9.655683123s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191961 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7049a7e-c4f1-41aa-b250-36991037c143] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7049a7e-c4f1-41aa-b250-36991037c143] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004137034s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191961 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)
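The "waiting 8m0s for pods matching" step above is implemented inside the test helpers. As a rough illustration only, here is a hypothetical standalone Go sketch that polls kubectl in the same general way; the context name and label come from this log, and the real helper also checks container readiness, which this simplified sketch skips:

// wait_busybox.go: simplified sketch of polling for the busybox pod to reach
// the Running phase, mirroring the wait step logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-191961",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("busybox is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}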

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-191961 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-191961 --alsologtostderr -v=3: (12.078781131s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-018508 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bde3dd99-1e55-4b1c-bc02-a95506e986c0] Pending
helpers_test.go:352: "busybox" [bde3dd99-1e55-4b1c-bc02-a95506e986c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bde3dd99-1e55-4b1c-bc02-a95506e986c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003639315s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-018508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961: exit status 7 (76.660252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-191961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
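The "status error: exit status 7 (may be ok)" line above reflects that "minikube status" returned a non-zero code for the stopped profile while still printing the host state ("Stopped"); the test tolerates that before re-enabling the dashboard addon. A hypothetical standalone Go sketch of such a tolerant check, with the binary path and profile name taken from this log:

// status_stopped.go: sketch of querying host status and treating the
// exit status 7 observed above (profile stopped) as acceptable.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-191961")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && !(errors.As(err, &exitErr) && exitErr.ExitCode() == 7) {
		fmt.Println("unexpected status failure:", err)
		return
	}
	fmt.Printf("host state: %s (exit status 7 tolerated for a stopped profile)\n", out)
}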

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (54.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-191961 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.171768753s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191961 -n old-k8s-version-191961
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-018508 --alsologtostderr -v=3
E1119 22:56:54.908589  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-018508 --alsologtostderr -v=3: (12.259727633s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508: exit status 7 (69.211788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-018508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-018508 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.043361034s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018508 -n no-preload-018508
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mxfnk" [4bdd98f3-4299-4444-93ee-dbf5f3d503ed] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003901429s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mxfnk" [4bdd98f3-4299-4444-93ee-dbf5f3d503ed] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004162125s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-191961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-191961 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
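The "Found non-minikube image" lines above come from filtering the profile's image list against the images minikube itself deploys. Below is a rough, hypothetical Go sketch of that kind of classification; the hard-coded image names are taken from this log, and the registry allowlist is an assumption for illustration, not minikube's actual rule set:

// classify_images.go: sketch of flagging images that did not ship with minikube.
package main

import (
	"fmt"
	"strings"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	// Assumed allowlist of prefixes for images minikube deploys itself.
	minikubePrefixes := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		known := false
		for _, p := range minikubePrefixes {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}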

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hpp5l" [efd9f3bc-bdd5-4976-b338-561dd5577ab9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004274061s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.05s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m30.049436735s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hpp5l" [efd9f3bc-bdd5-4976-b338-561dd5577ab9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004195112s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-018508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-018508 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 22:58:41.229056  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.165700663s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-044665 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e2c91413-2762-471a-bbcc-cb2b7e0ac3fc] Pending
helpers_test.go:352: "busybox" [e2c91413-2762-471a-bbcc-cb2b7e0ac3fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e2c91413-2762-471a-bbcc-cb2b7e0ac3fc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003527697s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-044665 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-044665 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-044665 --alsologtostderr -v=3: (12.159470999s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bb3c0020-a370-4686-aa7a-f5c0e59492e9] Pending
helpers_test.go:352: "busybox" [bb3c0020-a370-4686-aa7a-f5c0e59492e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bb3c0020-a370-4686-aa7a-f5c0e59492e9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003374141s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-841969 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-841969 --alsologtostderr -v=3: (12.288353864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665: exit status 7 (104.35349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-044665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-044665 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.055577115s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-044665 -n embed-certs-044665
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969: exit status 7 (126.95065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-841969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-841969 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.746093302s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841969 -n default-k8s-diff-port-841969
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z42jm" [a8e94848-aaa0-41d9-af7b-49bf2802f72f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004244504s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z42jm" [a8e94848-aaa0-41d9-af7b-49bf2802f72f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003970645s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-044665 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-044665 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9xf4k" [89d84645-3a4c-455f-95a0-a0770b7eff59] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004118928s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9xf4k" [89d84645-3a4c-455f-95a0-a0770b7eff59] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00408463s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-841969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.183052098s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-841969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (91.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1119 23:01:24.446615  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:27.008361  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:32.130462  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.624484  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.630794  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.642065  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.663365  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.704670  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.786000  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:39.951175  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:40.272745  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:40.914485  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:42.196439  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:42.372625  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:44.757762  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:49.879978  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:01:54.908313  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m31.217724502s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-467060 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-467060 --alsologtostderr -v=3: (1.603615681s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060: exit status 7 (94.836954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-467060 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 23:02:20.625212  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-467060 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.08451774s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-467060 -n newest-cni-467060
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-467060 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1119 23:02:43.817725  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m25.337028757s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-334366 "pgrep -a kubelet"
I1119 23:02:55.788955  862175 config.go:182] Loaded profile config "auto-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wkpzc" [5769b828-dfe0-4d5a-8416-caad006f2a27] Pending
E1119 23:03:01.587506  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wkpzc" [5769b828-dfe0-4d5a-8416-caad006f2a27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.005969562s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
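Each network-plugin group in this report repeats the same three in-cluster probes once the netcat pod is healthy: a DNS lookup, a loopback connection, and a hairpin connection back to the service name. Collected for the auto-334366 profile from the log above, and assuming the netcat deployment is still running, the checks are:

	kubectl --context auto-334366 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"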

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (66.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1119 23:03:41.229375  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/functional-642533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.884314535s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vxsx4" [1f8d1d0b-8a00-44e1-87d6-83a3ca9b8d46] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005385074s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-334366 "pgrep -a kubelet"
I1119 23:04:02.339148  862175 config.go:182] Loaded profile config "calico-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h7ml5" [f46b2d82-98a1-46eb-90ee-b0c25961bef7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 23:04:05.739852  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h7ml5" [f46b2d82-98a1-46eb-90ee-b0c25961bef7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004402932s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.1622745s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-334366 "pgrep -a kubelet"
I1119 23:04:42.504934  862175 config.go:182] Loaded profile config "custom-flannel-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x4nlm" [280d002a-8775-4b3a-bae5-7cf7b887e00f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 23:04:45.731117  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:45.737444  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:45.749132  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:45.770547  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:45.811867  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:45.893552  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:46.055215  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:46.376954  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:47.019153  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:48.300513  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x4nlm" [280d002a-8775-4b3a-bae5-7cf7b887e00f] Running
E1119 23:04:50.862682  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011414618s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (57.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1119 23:05:26.708282  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.174035877s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8z7pz" [7317ef70-734e-4af5-97e1-f9fc0a8b078e] Running
E1119 23:06:07.670684  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00386914s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-334366 "pgrep -a kubelet"
I1119 23:06:11.937131  862175 config.go:182] Loaded profile config "kindnet-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dxhm7" [27d93d39-6cec-4a76-9553-d179b0b77520] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dxhm7" [27d93d39-6cec-4a76-9553-d179b0b77520] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003277922s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tjwdq" [fa1faec9-4b17-49a7-a0c7-3948b1676be6] Running
E1119 23:06:21.877983  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003704974s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-334366 "pgrep -a kubelet"
I1119 23:06:24.491701  862175 config.go:182] Loaded profile config "flannel-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l79cv" [946c151a-2bfd-41d7-a377-054c84c8905e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l79cv" [946c151a-2bfd-41d7-a377-054c84c8905e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004402357s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)
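The NetCatPod step itself just (re)creates the test deployment and waits for it to become ready; a minimal sketch for the flannel-334366 profile, using kubectl wait in place of the test helper's internal polling (the 15m timeout mirrors the wait budget shown in the log):

	kubectl --context flannel-334366 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context flannel-334366 wait --for=condition=Ready pod -l app=netcat --timeout=15m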

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (62.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1119 23:06:49.581442  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/old-k8s-version-191961/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:06:54.908161  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/addons-441523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m2.874845862s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (85.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1119 23:07:07.351720  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/no-preload-018508/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:29.592217  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/default-k8s-diff-port-841969/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-334366 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.089973013s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-334366 "pgrep -a kubelet"
I1119 23:07:49.595359  862175 config.go:182] Loaded profile config "enable-default-cni-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmztl" [b42a3b93-753d-43fe-80c4-3715200ad467] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmztl" [b42a3b93-753d-43fe-80c4-3715200ad467] Running
E1119 23:07:56.092712  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.099019  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.110436  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.131968  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.173526  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.255815  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.417169  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:56.738661  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:57.380348  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:07:58.662716  862175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/auto-334366/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004064388s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-334366 "pgrep -a kubelet"
I1119 23:08:25.024935  862175 config.go:182] Loaded profile config "bridge-334366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-334366 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gp6kg" [0c5bba4e-6efc-4970-bfb0-7550430045a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gp6kg" [0c5bba4e-6efc-4970-bfb0-7550430045a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003670909s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-334366 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-334366 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-739940 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-739940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-739940
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-553369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-553369
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-334366 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:46:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-154655
contexts:
- context:
    cluster: kubernetes-upgrade-154655
    user: kubernetes-upgrade-154655
  name: kubernetes-upgrade-154655
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-154655
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.crt
    client-key: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-334366

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-334366"

                                                
                                                
----------------------- debugLogs end: kubenet-334366 [took: 3.588997111s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-334366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-334366
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-334366 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-334366" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-860325/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:46:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-154655
contexts:
- context:
    cluster: kubernetes-upgrade-154655
    user: kubernetes-upgrade-154655
  name: kubernetes-upgrade-154655
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-154655
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.crt
    client-key: /home/jenkins/minikube-integration/21918-860325/.minikube/profiles/kubernetes-upgrade-154655/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-334366

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-334366" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334366"

                                                
                                                
----------------------- debugLogs end: cilium-334366 [took: 3.812019054s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-334366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-334366
--- SKIP: TestNetworkPlugins/group/cilium (3.98s)

                                                
                                    